Menoufia University
Faculty of Computers and Information
Information Technology Department
FACE LOGIN : A ROBUST FACE IDENTIFICATION
SYSTEM FOR SECURITY-BASED SERVICE
BY
Ahmed Fawzy Gad
SUPERVISOR
Dr. Noura Abd El-Moez Semary
July 2015
Table of Contents (TOC)
Acknowledgment--------------------------------------------------------------------------------------------------5
Dedication----------------------------------------------------------------------------------------------------------6
Abstract-------------------------------------------------------------------------------------------------------------7
Symbols and Abbreviations----------------------------------------------------------------------------------8
List Of Tables------------------------------------------------------------------------------------------------------9
List Of Figures-----------------------------------------------------------------------------------------------------10
Chapter 1 - Introduction-----------------------------------------------------------------------13
1.1. Face Recognition Overview
1.2. Face Recognition Applications
1.3. Face Recognition Stages
1.3.1. Face Identification
1.3.2. Face Authentication
1.4. Face Recognition Major Classes
1.4.1. Holistic Face Recognition
1.4.2. Local Features Face Recognition
Chapter 2 - Background / Related Work---------------------------------------------------22
2.1.YCbCr Color Space Overview
2.2.Face Component Extraction Using YCbCr
2.2.1. Face Skin Model Detection
2.2.2. Face Cropping Process on Normal Static Image
2.2.3. Extraction Process and Measurement of Distances between Face Components
2.2.4. Face Division
2.2.5. Face Component Detection and Extraction
2.3.Face Recognition Based On Eigenfaces
2.3.1. The eigenface and the second-order eigenface method
2.3.2. PCA mixture model and EM learning
2.3.3. The second-order mixture-of-eigenfaces method
2.4.Skin Detection
2.4.1. HSV Color Space Overview
2.4.2. Skin Detection Using HSV
2.4.3. RGB Color Space Overview
2.4.4. Skin Detection Using RGB
Chapter 3 - System Analysis------------------------------------------------------------------38
3.1.System Analysis Overview
3.1.1 What is a System?
3.1.2 System Life Cycle
3.2.Phases of system development life cycle
3.2.1 Preliminary System Study
3.2.2 Feasibility study
3.2.3 Detailed system study
3.2.4 System analysis
3.2.5 System design
3.2.5.1 Preliminary or General Design
3.2.5.2 Structured or Detailed Design
3.2.6 Coding
3.2.7 Testing
3.2.7.1 Program test
3.2.7.2 System test
3.2.8 Implementation
3.2.8.1 Changeover
3.2.8.1.1 Direct Changeover
3.2.8.1.2 Parallel Run
3.2.8.1.3 Pilot run
3.2.9 Maintenance
3.2.10 Documentation
3.2.10.1 User or Operator Documentation
3.2.10.2 System Documentation
3.3.Data Flow Diagram (DFD)
3.3.1 Context Diagram
3.3.2 Level 0 Diagram
3.4.Entity Relationship Diagram (ERD)
3.5.Unified Modeling Language (UML)
3.5.1 Use Case Diagram
3.5.2 Class Diagram
3.5.3 Sequence Diagram
3.6.System Overview
3.7.System Phases
3.7.1 Image Capture
3.7.2 Image Preprocessing
3.7.3 Skin and Face Detection
3.7.3.1 Color Space Selection
3.7.3.2 RGB-H-CbCr Color Space
3.7.3.3 Skin Colour Subspace Analysis
3.7.3.4 Skin Colour Bounding Rules
3.7.3.5 Morphological Operations
3.7.3.6 Skin Detection Results
3.7.3.7 Detecting Face Region
3.7.4 Face Features Extraction
3.7.5 Face Features Detection Enhancement
3.7.5.1 Eye Detection Enhancement
3.7.5.2 Nose Detection Enhancement
3.7.5.3 Mouth Detection Enhancement
3.7.5.4 Results After Enhancement
3.7.6 Measuring Distance Between Face Components
Chapter 4 - System Tools-----------------------------------------------------------------------71
4.1.Desktop Application
4.1.1 MATLAB
4.1.2 Java
4.1.3 MySQL
4.1.4 JDeveloper Studio
4.2.Mobile Application
4.2.1 Android
4.2.2 XML
4.2.3 SQLite
4.2.4 Eclipse IDE
4.3.Web Site
4.3.1 HTML
4.3.2 CSS
4.3.3 PHP
4.3.4 JavaScript
4.3.5 JetBrains PHPStorm
Chapter 5 - Results------------------------------------------------------------------------------80
Chapter 6 - Conclusion & Future Work----------------------------------------------------82
Chapter 7 - References-------------------------------------------------------------------------83
Acknowledgement
Full thanks to the supervisor of this project, Dr. Noura, for her full support in producing this work and for sharing her experience.
Dedication
This work is dedicated to every Egyptian who wants to raise this country to high ranks.
Abstract
The human face is one of the most representative parts of a person and has a wide range of applications. Biometrics is an emerging area of bioengineering concerned with automated methods of recognizing a person based on physiological or behavioral characteristics.
Using the face, an identification system can differentiate among persons from just a simple image. The proposed system uses image processing and pattern recognition techniques that allow the detection and identification of the applied face image with high accuracy and low computational complexity.
The major approach is to enhance the quality of the applied image using preprocessing algorithms such as the Retinex algorithm, and then to detect human skin color using several combined color spaces: RGB, HSV and YCbCr. Skin detection accuracy using RGB-H-CbCr reaches 97%. Then, using features extracted from the face, its components (nose, mouth, eyes and chin) can be detected using the Viola-Jones algorithm and the Frangi filter.
The last major phase is to extract features based on the distances between the centers of the detected components and to compare them with the features of other face images to make the identification decision. The extracted feature vector contains 11 distances.
Two large face databases are used: the Center for Vital Longevity Face database and the VidTIMIT Audio-Video database. They contain a number of images with different expressions, which allows the system to be tested under different cases.
The proposed system achieves high accuracy over the listed databases, reaching 98%. A full MATLAB-based implementation of the system is created. Integration between MATLAB, Java and Android is used to create distributable desktop and mobile applications that can be used as a security system to log users in by their face images rather than by text passwords, with all their complexity. A web-based service is also provided that allows sites to use this system for their own logins.
The proposed system can also be used in various areas, such as detecting criminals and malicious users; enhancing security by combining it with surveillance cameras to recognize human faces directly; helping families find lost children by using their images to search via the web site; and automatically alerting when a VIP enters a public organization.
Symbols and Abbreviations
Symbol # Symbol Meaning
1 AFR Automatic Face Recognition
2 PCA Principal Component Analysis
3 EBGM Elastic Bunch Graph Matching
4 RGB Red; Green; Blue
5 HSV Hue; Saturation; Value
6 YCbCr Luminance; Chroma: Blue; Chroma: Red
7 DFD Data Flow Diagram
8 UML Unified Modeling Language
9 ERD Entity Relationship Diagram
List Of Tables
Table # Caption Page
1 Skin Detection Accuracy By Different Color Models 62
2 Accuracy of Proposed Framework Based on the Center for Vital Longevity Database 81
3 Accuracy of Proposed Framework Based on the VidTIMIT Audio-Video Database 81
List Of Figures
Figure # Caption Page
1 Face Recognition Challenges 10
2 Face Illumination Problem 14
3 Pose Problem 15
4 Error rate of face recognition from 1993 to 2006 17
5 Captured face image is compared with a set of images 17
6 Face identification System 18
7 Presenting Identity to users 18
8 Normal face, average face from the AR Face database, and normalized face 19
9 Graph Matching 20
10 Converting RGB to YCbCr 22
11 Visualization of YCbCr in terms of its components 22
12 Face Components Extraction Steps 23
13 Face Detection Stages 23
14 Face Region Detected 24
15 Face Region Division Model 25
16 Eye Map Formulation 25
17 Mouth Map Formulation 25
18 Face Components After Face Division Process 26
19 A procedure of processing face images in the second-order eigenface method 27
20 An illustration of the PCA mixture model 28
21 An iterative EM learning algorithm 28
22 Examples of face image reconstructions 29
23 Binary classifier to segment color image pixels into skin and non-skin 30
24 HSV Color Model 31
25 Skin detection using HSV 32
26 Skin detection scheme using HSV 32
27 Skin after morphological operations and filtering 32
28 RGB Color Model 33
29 An annotation process for skin and non-skin ground truth information 34
30 Transformation of RGB from 3D into 2D matrix 35
31 RGB skin color rules 35
32 Histogram of (R-G)/(R+G) and B/(R+G) respectively 36
33 RGB skin color rules based on color histogram 36
34 Examples of skin color classification 37
35 Basic System Components 38
36 Phases of System Development Life Cycle 39
37 DFD Context Diagram 44
38 DFD Level 0 Diagram 45
39 ERD Diagram 46
40 UML Use Case Diagram 47
41 UML Class Diagram 48
42 UML Sequence Diagram 49
43 System Flow Diagram 50
44 Retinex Algorithm Results 51
45 Skin Detection Example 51
46 Examples of input image to the system 52
47 Illumination effect 53
48 Before and after applying histogram equalization over an image 54
49 Applying RETINEX over a degraded color images 55
50 Density plots of Asian skin in different color spaces 57
51 Density plots of Asian, African and Caucasian skin in different color spaces 57
52 System overview for face detecting using skin color 58
53 H-V and H-S subspace plots 59
54 Distribution of the H (Hue) channel 59
55 Distribution of Y, Cb and Cr respectively 60
56 Detected skin after morphological operations 61
57 Skin detection accuracy by different color spaces 62
58 Skin Color Detection 62
59 False alarms in face detection 63
60 Face Features Detection 64
61 Frangi Filter Result 65
62 Eye Pupil Detection 65
63 Simple Eye Diagram 66
64 Sclera Detection Example 66
65 Eye Detection Example 66
66 Enhanced Nose Detection 67
67 Enhanced Mouth Detection 68
68 Some faces annotated with bounding box over eyes, nose and mouth 69
69 Cropped face showing distances between each pair of face components 70
70 Diagram showing how to measure distance between left eye center and mouth center 70
71 MATLAB Screen 71
72 Java Application Screen 73
73 JDeveloper Studio Screen 74
74 Eclipse IDE Screen 76
75 Identification Experiments 81
76 Detecting neck as part of the face 82
77 Identical twins 83
Chapter 1 - Introduction
1.1 Face Recognition Overview
A new opportunity for the application of statistical methods is driven by growing interest in biometric
performance evaluation. Methods for performance evaluation seek to identify, compare and interpret
how characteristics of subjects, the environment and images are associated with the performance of
recognition algorithms.
Biometrics is an emerging area of bioengineering; it is the automated method of recognizing a person based on a physiological or behavioral characteristic. There exist several biometric systems such as signature, fingerprints, voice, iris, retina, hand geometry, ear geometry, and face. Among these systems, facial recognition appears to be one of the most universal, collectable, and accessible.
The field of biometric face recognition blends methods from computer science, engineering and statistics; however, statistical reasoning has been applied predominantly in the design of recognition algorithms.
Biometric face recognition, otherwise known as Automatic Face Recognition (AFR), is a particularly attractive biometric approach, since it focuses on the same identifier that humans use primarily to distinguish one person from another: their “faces”.
One of its main goals is the understanding of the complex human visual system and the knowledge of
how humans represent faces in order to discriminate different identities with high accuracy.
Face recognition is concerned with identifying individuals from a collection of face images. Face recognition belongs to a vast range of biometric approaches that also includes fingerprint, iris/retina and voice recognition. Overall, biometric approaches are concerned with identifying individuals by their unique physical characteristics.
Traditionally, passwords and Personal Identification Numbers (PINs) have been employed to formally identify individuals, but the disadvantages of such methods are that someone else may use them or they can be easily forgotten.
Given these problems, the development of biometric approaches such as face recognition, fingerprint, iris/retina and voice recognition provides a far superior solution for identifying individuals, because not only does it uniquely identify individuals, but it also minimizes the risk of someone else using another person’s identity.
However, a disadvantage of fingerprint, iris/retina and voice recognition is that they require active cooperation from individuals. For example, fingerprint recognition requires participants to press their fingers onto a fingerprint reading device, iris/retina recognition requires participants to stand in front of an iris/retina scanning device, and voice recognition requires participants to speak into a microphone.
Therefore, face recognition is considered a better approach than other biometrics because it is versatile: individuals can be identified actively, by standing in front of a face scanner, or passively, as they walk past a face scanner.
There are also disadvantages to using face recognition. Faces are highly dynamic and can vary considerably in orientation, lighting, scale and facial expression; therefore, face recognition is considered a difficult problem to solve.
Given these problems, many researchers from a range of disciplines, including pattern recognition, computer vision and artificial intelligence, have proposed many solutions to minimize such difficulties and to improve the robustness and accuracy of such approaches. Among those issues, the following are prominent for most systems: the illumination problem, the pose problem, scale variability, images taken years apart, glasses, moustaches, beards, low quality image acquisition, partially occluded faces, etc.
Figure 1. Face Recognition Challenges
The illumination problem is illustrated in the next figure, where the same face appears different due to a change in lighting. More specifically, the changes induced by illumination can be larger than the differences between individuals, causing systems based on comparing images to misclassify the identity of the input image.
Figure 2. Face Illumination Problem
The pose problem is illustrated in the next figure, where the same face appears different due to changes in viewing conditions. The pose problem has been divided into three categories:
1. The simple case, with small rotation angles.
2. The most commonly addressed case, when there is a set of training image pairs (frontal and rotated images).
3. The most difficult case, when training image pairs are not available and illumination variations are present.
Figure 3. Pose Problem
Other challenges in face recognition include: scale variability, when face images are taken at different scales, which affects the results; moustaches and beards; images taken with low-quality acquisition devices, which produces colors different from the original; detection of the face in both color and grayscale images; and, a major problem, partially occluded faces, where part of the face is hidden by glasses, a hat or other items.
Face recognition has far-reaching benefits to corporations, the government and the greater society. The applications of face recognition in corporations include access to computers, secure networks and video conferencing; access to office buildings and restricted sections of these buildings; access to storage archives; and identifying members at conferences and annual general meetings.
Specific corporate applications include access and authorization to operate machinery; clocking on and clocking off when beginning and finishing work; assignment of work responsibilities and accountability based on identity; monitoring employees; and confirming the identity of clients, suppliers and transport and logistics companies when they send and receive packages. Additionally, sales, marketing and advertising companies could identify their customers in conjunction with customer relationship management software. Applications of face recognition for state and federal governments may include access to parliamentary buildings and press conferences, and access to secure, confidential government documents, reports and doctrines. Specific government uses of face recognition can include Australian Customs verifying the identity of individuals against their passport files and documents, or state and federal police using face recognition to improve crime prevention and facilitate police activities. Applications of face recognition in the greater society may include election voting registration, access to venues and functions, verifying the identity of drivers against their issued licenses and personal identification cards, confirming identity for point-of-sale transactions like credit cards, and confirming identity when accessing funds from an automatic teller machine. Other applications of face recognition include facilitating home security and gaining access to motor vehicles.
1.2 Face Recognition Applications
There are a large number of applications of face recognition:
Easy people tagging
Facebook’s automatic tag suggestion feature, which uses face recognition to suggest people whom users might want to tag in different photos, got people hot under the collar earlier this year. Face recognition for people tagging certainly saves time. It’s currently available in Apple’s iPhoto, Google’s Picasa and on Facebook.
Gaming
Image and face recognition are bringing a whole new dimension to gaming. The advanced motion-sensing capabilities of Microsoft’s Kinect have given the Xbox 360 a whole new lease of life and opened up gaming to new audiences by completely doing away with hardware controllers.
Security
Face recognition could one day replace password logins in our favorite apps – imagine logging in to Twitter with your face.
Marketing
Face recognition is gaining the interest of marketers. A webcam can be integrated into a television to detect any face that passes by. The system then estimates the race, gender, and age range of the face. Once the information is collected, a series of advertisements can be played that are specific to the detected race/gender/age.
Due to the high importance of face recognition as a research field, the error rate has dropped sharply, as shown in the following figure:
Figure 4. Error rate of face recognition from 1993 to 2006
The face recognition problem can be divided into two main stages:
1. Face verification (or authentication).
2. Face identification (or recognition).
First, using a simple camera, a face is detected, which includes identifying and locating the face in the image. The recognition stage is the second stage; it includes feature extraction, where important information for discrimination is saved, and matching, where the recognition result is given with the aid of a face database.
Figure 5. Captured face image is compared with a set of images
Identification and authentication are two terms that describe the major phases of the face recognition process. The terms are often used synonymously, but authentication is typically a more involved process than identification. Identification is what happens when one professes to have a certain identity in the system, while authentication is what happens when the system determines that you are who you claim to be. Both processes are usually used in tandem, with identification taking place before authentication, but they can stand alone, depending on the nuances of the system.
1.3.1 Face Identification
Identification is the process of presenting an identity to a system. It is done in the initial stages of gaining access to the system and is what happens when one claims to be a particular system user. The claim can take the form of providing your username during the login process, placing your finger on a scanner, giving your name on a guest list, or any other format in which you claim an identity with the aim of gaining access.
As in the next figure, before users try to enter a system or a building, a camera captures their face images; then, using a set of test images, the face is identified.
Figure 6. Face identification System
Identification is not necessary for some systems, such as ATM cards, where anyone with the correct
code can gain access to your account without identifying themselves.
Authentication is the process of validating an identity provided to a system. This entails checking the
validity of the identity prior to the authorization phase.
1.3.2 Face Authentication
The process of checking the validity of the evidence provided to support the claimed identity must be
sufficiently robust to detect impostors. Authentication usually occurs after identification is complete,
such as when you supply a password to support a username during the login process. It can happen,
however, at the same time as the identification process.
In the next figure, the captured images are applied to the system; then, after some processing, the identified person's information, such as name and age, is presented to the viewer.
Figure 7. Presenting Identity to users
Identification and authentication are not easily distinguished, especially when both occur in one transaction. They may appear synonymous, but they are two different processes.
The primary difference between them is that identification relates to the provision of an identity, while authentication relates to the checks made to ensure the validity of a claimed identity. Simply put, the identification process involves making a claim to an identity, whereas the authentication process involves proving that identity.
Identification occurs when you type your username into a login screen, because you have claimed to be that person, while authentication occurs after you have typed in a password and hit the “login” button, at which time the validity of your claim to the username is determined. Some common authentication methods include smartcards, biometrics, RSA tokens and passwords, while common identification methods are usernames and smartcards.
Recognition systems have been divided into two major classes:
1. Holistic methods.
2. Local feature-based methods.
1.4.1 Holistic Face Recognition
In holistic approaches, the whole face image is used as the raw input to the recognition system. An ex-
ample is the well-known PCA-based technique introduced by Kirby and Sirovich, followed by Turk and
Pentland. Holistic processing is generally accepted to be unique to faces and provides strong support for
the notion that faces are processed differently relative to all other object categories.
Features found by holistic approaches represent the optimal variances of pixel data in face images, which are used to uniquely distinguish one individual from another.
Holistic face recognition utilizes global information from faces to perform face recognition. The global information from faces is fundamentally represented by a small number of features which are directly derived from the pixel information of face images. This small number of features distinctly captures the variance among different individual faces and is therefore used to uniquely identify individuals. The next figure shows an example of such a method, used to find the average face as a global feature.
Figure 8. Normal face, average face from the AR Face database, and normalized face, which is the difference between the normal face and the average face.
For each face $\Gamma_i$ in the database of M faces, the average face $\Psi$ is calculated by this equation:

$\Psi = \frac{1}{M}\sum_{i=1}^{M}\Gamma_i$

Once the average face is found, the normalized face is calculated by subtracting the average face of the whole dataset from each individual face:

$\Phi_i = \Gamma_i - \Psi$
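A minimal MATLAB sketch of these two steps (assuming the M training faces have already been vectorized into the columns of a matrix faces; the variable names are illustrative, not the project's actual code):

faces = double(faces);                                   % N x M: each column is one vectorized face image
avgFace = mean(faces, 2);                                % the average face (Psi)
normFaces = faces - repmat(avgFace, 1, size(faces, 2));  % normalized faces (Phi_i), one per column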
1.4.2 Local Features Face Recognition
Local features are extracted, such as eyes, nose and mouth. Their locations and local statistics (appear-
ance) are the input to the recognition stage. An example of this method is Elastic Bunch Graph Match-
ing (EBGM).
Feature-based face recognition uses a priori information or local features of faces to select a number of features to uniquely identify individuals. Local features include the eyes, nose, mouth, chin and head outline, which are selected from face images and used to uniquely identify individuals.
One of the local-feature methods used is graph matching, as shown in the next figure.
Figure 9. Graph Matching
Elastic Bunch Graph Matching recognizes faces by matching the probe set, represented as input face graphs, to the gallery set, represented as the model face graph. Fundamental to Elastic Bunch Graph Matching is the concept of nodes.
Essentially, each node of the input face graph is represented by a specific feature point of the face. For example, one node represents an eye and another node represents the nose, and the concept continues for the other face features.
The model face graph represents the gallery set; only one model face graph is used to represent the entire gallery set. The model face graph can be conceptually thought of as a number of input face graphs stacked on top of each other and concatenated to form one model face graph, with the exception that this is applied to the gallery set instead of the probe set. This allows the grouping of the same types of face features from different individuals.
For example, the eyes of different individuals could be grouped together to form the eye feature point
for the model face graph and the noses of different individuals can be grouped together to form the nose
feature point for the model face graph.
Given the definitions of the input face graph and the model face graph, determining the identity of the input face graph amounts to achieving the smallest distance to the model face graph for a particular gallery face.
Chapter 2 - Background / Related Work
Face identification has been the focus of many researchers in previous years because it can facilitate the recognition process, and this problem has been addressed by several methods. A brief review of some solutions is presented to give readers a good background on this topic.
2.1 YCbCr Color Space Overview
The YCbCr color space is widely used for digital video. In this format, luminance information is stored
as a single component (Y), and chrominance information is stored as two color-difference components
(Cb and Cr). Cb represents the difference between the blue component and a reference value. Cr repre-
sents the difference between the red component and a reference value.
YCbCr is sometimes abbreviated to YCC. Y′CbCr is often called YPbPr when used for analog component video, although the term Y′CbCr is commonly used for both systems, with or without the prime. Y′CbCr is not an absolute color space; rather, it is a way of encoding RGB information. The actual color displayed depends on the actual RGB primaries used to display the signal. Therefore, a value expressed as Y′CbCr is predictable only if standard RGB primary chromaticities are used.
Figure 10. Converting RGB to YCbCr
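For illustration, MATLAB's Image Processing Toolbox performs this conversion directly (a minimal sketch; the image file name is hypothetical):

rgb = imread('face.jpg');   % load an RGB image
ycbcr = rgb2ycbcr(rgb);     % convert to YCbCr
Y  = ycbcr(:,:,1);          % luminance component
Cb = ycbcr(:,:,2);          % blue-difference chroma component
Cr = ycbcr(:,:,3);          % red-difference chroma component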
Individual color components of YCbCr color space are luma Y, chroma Cb and chroma Cr. Chroma Cb
corresponds to the U color component, and chroma Cr corresponds to the V component of a general
YUV color space.
The next figure shows a visualization of YCbCr color space
Figure 11. Visualization of YCbCr in terms of its components
2.2 Face Components Extraction Based On YCbCr [1]
Principally, the research is conducted in the automated steps illustrated in the next figure. The first step is face detection based on a skin color model. The result is then cropped to normalize the face region. Next, extraction of the eyes, nose, and mouth components is conducted, and the distances between them are measured.
Figure 12. Face Components Extraction Steps
2.2.1 Face Skin Model Detection
Ninety skin samples of Indonesian faces are used. The extraction step is conducted by decreasing the luminance level to reduce lighting effects. Decreasing the luminance level is done by converting the image from RGB to YCbCr (chromatic color). After the Cb and Cr values are obtained, a low-pass filter is applied to the image in order to reduce noise. The reshape function is then applied to the Cb and Cr values, turning them into row vectors. The face detection process begins with the skin model detection process, applying a threshold value to get the binary image, as shown in the following figure:
Figure 13. Face Detection Stages
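A minimal MATLAB sketch of this stage follows; the Cb/Cr thresholds shown are commonly used illustrative values, not necessarily the exact numbers of the cited paper:

ycbcr = rgb2ycbcr(imread('face.jpg'));   % hypothetical input image
Cb = double(ycbcr(:,:,2));
Cr = double(ycbcr(:,:,3));
h = fspecial('gaussian', [5 5], 2);      % low-pass filter kernel to reduce noise
Cb = imfilter(Cb, h);
Cr = imfilter(Cr, h);
skinMask = Cb >= 77 & Cb <= 127 & Cr >= 133 & Cr <= 173;   % threshold to a binary skin image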
2.2.2 Face Cropping Process on Normal Static Image
The binary image obtained from threshold process is further processed to take and crop the face part
of the image. The face image is the part in white color (pixel value = 1). These processes are
as the following steps:
1. Separating the skin part of the face from non-face skin parts, such as arms, hands and shoulders.
2. Determining the hole area of the picture, which indicates the face region. The face region is detected by the following equation:

$E = C - H$

where E is the Euler number, C is the number of connected components and H is the number of holes in the region. By using this equation, the holes can indicate the region of the face.
3. Finding the statistics of color values between the hole area of the picture (which indicates the face area) and the face template picture, after the hole that represents the face region has been determined. The following equations are used to find the center of mass when determining the position of the face part of the picture:

$\bar{x} = \frac{1}{A}\sum_{(x,y)\in R} x, \qquad \bar{y} = \frac{1}{A}\sum_{(x,y)\in R} y$

where A is the number of pixels in the region R.
Figure 14. Face Region Detected
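In MATLAB, both the Euler-number test and the center of mass can be read from the binary image with regionprops (a sketch, assuming skinMask is the binary image produced by the thresholding step):

stats = regionprops(skinMask, 'EulerNumber', 'Centroid', 'Area');
[~, idx] = max([stats.Area]);          % consider the largest skin region
if stats(idx).EulerNumber < 1          % E = C - H < 1: the region contains holes
    faceCenter = stats(idx).Centroid;  % center of mass [x y] of the candidate face
end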
2.2.3 Extraction Process and Measurement of Distances between Face Components
The face region image resulting from the face detection process is further processed to obtain the face components and the distances between them. This is conducted by extracting the eyes, nose, and mouth components. The extraction determines the components' locations and is done in the YCbCr color space, separating the luminance and chrominance components in order to reduce the lighting effect. Distances measured are between:
7. Nose height
8. Nose width
The face extraction process in this research is conducted in three stages:
1. Face division.
2. Face components detection and extraction.
3. Measurement/calculation of distances between face components.
The face image from which the components will be extracted is first processed by dividing it into regions, in order to narrow down the area for detection. The extraction result can then be expected to be more accurate. The division also minimizes the probability of other components being detected. Detection is conducted by computing the color space components in the regions assumed to be the locations of face components. These are extracted to determine the locations of the components. The process of face component extraction is conducted next.
2.2.4 Face Division
The face is divided into three parts: face, eyes, and mouth regions. The face image to be divided must contain at least the forehead and chin regions, and at most the neck region. Some improvements are made to the mouth region to get better results than the previous research. The former research divided the mouth region as illustrated in the next figure. An approximate position of the mouth is determined as the center of the region, vertically and horizontally. However, a neck part may exist that affects the mouth component's position in the mouth region, as the mouth component is not always located vertically at the center of the region, as illustrated in the next figure.
Figure 15. Face Region Division Model
2.2.5 Face Component Detection and Extraction
After the face is divided into regions, its components will be extracted.
1. Eyes extraction is done by forming an eye map as shown in the following figure.
Figure 16. Eye Map Formulation
2. Mouth extraction is done by forming a mouth map as shown in the following figure:
Figure 17. Mouth Map Formulation
Based on the detected mouth and eye locations, the mouth region is detected.
After the whole extraction process is completed, each face component is surrounded by a bounding box, and distances between the components of the face are calculated as shown in the following figure. The face distances are obtained by calculating the difference between two points' coordinates if a perfectly vertical or horizontal line connects those points. Otherwise, the Pythagorean theorem is used, since additional lines can be drawn from the coordinates to form a right triangle; the face component distance is the hypotenuse of the triangle. The value is rounded to the nearest integer.
Figure 18. Face Components After Face Division Process
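A minimal MATLAB sketch of this distance rule between two component centers (the coordinates are made-up example values):

eyeCenter   = [120, 85];               % [x y] center of the left-eye bounding box (example)
mouthCenter = [150, 170];              % [x y] center of the mouth bounding box (example)
dx = mouthCenter(1) - eyeCenter(1);
dy = mouthCenter(2) - eyeCenter(2);
if dx == 0 || dy == 0
    d = abs(dx) + abs(dy);             % centers aligned vertically or horizontally
else
    d = round(sqrt(dx^2 + dy^2));      % Pythagorean theorem, rounded to the nearest integer
end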
2.3 Face Recognition Based On Eigenfaces [2]
The eigenface method is a well-known template matching approach. Face recognition using the eigenface method is performed using feature values that are projected by one eigenface set obtained from principal component analysis (PCA). In addition, the eigenface method can be used for face coding (face reconstruction), which is the technology of extracting a small face code in order to reconstruct face images. The eigenface method uses PCA, which has the property of optimal reconstruction.
However, a single eigenface set is not enough to represent complicated face images with large variations of poses and/or illuminations, and it is often not effective to use PCA for analyzing a nonlinear structure such as face images, because PCA is inherently a linear method. To overcome this weakness, a mixture-of-eigenfaces method is used that employs a mixture of multiple eigenface sets obtained from a PCA mixture model for an effective representation of face images. The proposed method is motivated by the idea, from the PCA mixture model, that classification performance can be improved by modeling each class as a mixture of several components and by performing the classification in a compact and decorrelated feature space.
2.3.1 The eigenface and the second-order eigenface method
PCA is a well-known technique for multivariate linear data analysis. The central idea of PCA is to reduce the dimensionality of a data set while retaining the variations in the data set as much as possible. In PCA, a set of N-dimensional observation vectors X is reduced to a set of $N_1$-dimensional feature vectors Y ($N_1 < N$) by a transformation matrix U:

$Y = U^{T}X$
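A minimal MATLAB sketch of computing one eigenface set with PCA (assuming faces is an N x M matrix of vectorized training faces and k is the number of eigenfaces to keep; the M x M surrogate covariance trick is the standard eigenfaces shortcut):

A = faces - repmat(mean(faces, 2), 1, size(faces, 2));  % subtract the average face
C = A' * A;                                             % M x M surrogate covariance (M << N)
[V, D] = eig(C);
[~, order] = sort(diag(D), 'descend');                  % strongest components first
U = A * V(:, order(1:k));                               % N x k eigenfaces (transformation matrix)
U = U ./ repmat(sqrt(sum(U.^2, 1)), size(U, 1), 1);     % normalize each eigenface to unit length
Y = U' * A;                                             % k x M projected feature vectors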
A face image can be effectively reconstructed from the eigenfaces. However, in some conditions the set of eigenfaces does not represent the faces well. For example, under various lighting conditions, the initial principal components in the set of eigenfaces mainly reflect the lighting factors in the face image. In this situation, using only the set of eigenfaces is not effective for representing the face images.
To overcome this limitation, a second-order eigenface method is used that employs not only the set of eigenfaces of the original face images but also the set of second-order eigenfaces of the residual face images, which are defined as the differences between the original face images and the reconstructed images obtained from the set of eigenfaces.
Figure 19. A procedure of processing face images in the second-order eigenface method
2.3.2 PCA mixture model and EM learning
Both the eigenface method and the second-order eigenface method use only one set of eigenfaces. However, one set of eigenfaces is often not enough to represent face images with large variations of poses and/or illuminations.
A second-order mixture-of-eigenfaces method is used that combines the second-order eigenface method and the mixture-of-eigenfaces method. It provides a couple of mixtures of multiple eigenface sets.
The PCA mixture model is used to estimate a density function. Basically, the central idea comes from the combination of mixture models and PCA.
In a mixture model, a class is partitioned into K clusters, and the density function of an N-dimensional observation vector x is represented by a linear combination of the component densities of the K partitioned clusters:

$p(\mathbf{x}) = \sum_{k=1}^{K} P(k)\, p(\mathbf{x} \mid k)$

where $P(k)$ is the prior probability of the k-th cluster. The following figure illustrates a PCA mixture model where the number of mixture components K = 2, the dimension of the feature vectors N = 2, the line segments in each cluster represent the two column vectors $U_1$ and $U_2$, and the intersection of the two line segments represents a mean vector m.
To use the PCA mixture model for estimating the data distribution, both the appropriate partitioning of the class and the estimation of the model parameters of the partitioned clusters must be performed. This task can be performed successfully due to an important property of the mixture model: for many choices of component density function, it can approximate any continuous density to arbitrary accuracy if the model has a sufficiently large number of components and the parameters of the model are chosen appropriately.
Figure 20. An illustration of the PCA mixture model
2.3.3 The second-order mixture-of-eigenfaces method
The K partitioned clusters are obtained through EM learning, as shown in the next figure, over the whole set of face images, where each cluster is represented by its independent component parameters. To represent the face images accurately, the second-order eigenface method can be applied to each cluster independently. This is called the second-order mixture-of-eigenfaces method, in that the face images are represented by a mixture of several components and each partitioned component is represented by a couple of approximate and residual eigenface sets.
Figure 21. An iterative EM learning algorithm
The next figure shows examples of several face images reconstructed by the proposed second-order mixture-of-eigenfaces method, where the face images in rows (a)–(d) correspond to the original images, the second-order reconstructed images, the first-order approximate images, and the residual images, respectively.
Figure 22. Examples of face image reconstructions
2.4 Skin Detection
Skin detection in images is a theme present in many applications; it is the first step for face recognition, for example. Another application is nudity detection on the Internet. This work presents a system for automatic skin detection.
Skin color detection is frequently used for searching for people, face detection, pornographic filtering and hand tracking. The presence of skin or non-skin in a digital image can be determined by manipulating pixel color and/or pixel texture. The main problem in skin color detection is to represent the skin color distribution with a model that is invariant or least sensitive to changes in illumination conditions. Another problem comes from the fact that many objects in the real world may possess almost similar skin-tone color, such as wood, leather, skin-colored clothing, hair and sand. Moreover, skin color differs between races and can differ from one person to another, even among people of the same ethnicity. Finally, skin color will appear slightly different when different types of cameras are used to capture the object or scene.
Skin colour is produced by a combination of melanin, haemoglobin, carotene, and bilirubin. Haemoglobin gives blood a reddish or bluish color, while carotene and bilirubin give skin a yellowish appearance. The amount of melanin makes skin appear darker. Due to its vast application in many areas, skin color detection research is becoming increasingly popular in the computer vision research community. Today, skin color detection is often used as preprocessing in applications such as face detection, pornographic image detection, hand gesture analysis, people detection and content-based information retrieval.
Skin color fills only a small fraction of the whole color model; thus, any frequent appearance in an image could be a clue to human presence. A skin color classifier defines a decision boundary for skin color pixels in the selected color model based on a database of skin-colored pixels. This classifier can be created using different techniques such as k-means, Bayesian methods, maximum entropy, neural networks and others.
Figure 23. Binary classifier to segment color image pixels into skin and non-skin
Skin color provides computationally effective information that is robust against rotations, scaling, and partial occlusions. Skin color can also be used as complementary information to other features such as shape, texture, and geometry.
Detecting skin-colored pixels, although it seems a straightforward and easy task, has proven to be quite challenging for many reasons. This is because the appearance of skin color in an image depends on the illumination conditions under which the image was captured.
Therefore, a major challenge in skin color detection is to represent the skin color distribution with a model that is invariant or least sensitive to changes in illumination conditions. In addition, the choice of color model used for skin color modeling can significantly affect the performance of any skin color distribution method.
Another challenge comes from the fact that many objects in the real world may have almost similar skin-tone colors, such as wood, leather, skin-colored clothing, hair, sand, etc. Moreover, skin color differs between human races and can differ from one person to another, even among people of the same ethnicity. Finally, skin color will appear slightly different when different types of cameras are used to capture the object or scene.
The main problem of skin color detection is to develop a skin color detection algorithm or classifier that is robust to the large variations in color appearance. Some objects may have almost similar skin-tone colors, which are easily confused with skin color. Skin color can vary in appearance based on changes in background color, illumination, and the location of light sources, and other objects within the scene may cast shadows or reflect additional light.
No specific methods or techniques have been proposed that make skin color detection robust under varying lighting conditions, especially when the illumination color changes. This condition may occur in both outdoor and indoor environments with a mixture of daylight and artificial light.
Many non-skin-color objects overlap with skin color, and most of the pixel-based methods proposed in the literature cannot solve this problem. This problem is difficult to solve because skin-like materials are objects that appear to be skin-colored under a certain illumination condition.
2.4.1 HSV Color Space Overview
Hue, Saturation, Value or HSV is a color model that describes colors in terms of their shade (saturation
or amount of gray) and their brightness (value or luminance).
The HSV color wheel may be depicted as a cone or cylinder as shown in the following figure:
Figure 24. HSV Color Model
The hue (H) of a color refers to which pure color it resembles. All tints, tones and shades of red have
the same hue.
Hues are described by a number that specifies the position of the corresponding pure color on the color
wheel, as a fraction between 0 and 1. Value 0 refers to red; 1/6 is yellow; 1/3 is green; and so forth
around the color wheel.
The saturation (S) of a color describes how white the color is. A pure red is fully saturated, with a satu-
ration of 1; tints of red have saturations less than 1; and white has a saturation of 0.
The value (V) of a color, also called its lightness, describes how dark the color is. A value of 0 is black,
with increasing lightness moving away from black.
The outer edge of the top of the cone is the color wheel, with all the pure colors. The H parameter de-
scribes the angle around the wheel.
The S (saturation) is zero for any color on the axis of the cone; the center of the top circle is white. An
increase in the value of S corresponds to a movement away from the axis.
The V (value or lightness) is zero for black. An increase in the value of V corresponds to a movement
away from black and toward the top of the cone.
The HSV color space is quite similar to the way in which humans perceive color.
HSV separates luma, or the image intensity, from chroma or the color information. This is very useful
in many applications.
2.4.2 Skin Detection using HSV [3]
First, the image is converted from RGB to the HSV color space, because HSV is more related to human color perception. Skin in the H channel is characterized by values between 0 and 50, and in the S channel by values from 0.23 to 0.68. However, the component used to segment skin pixels is the H channel, with values ranging between 6 and 38, together with a mix of morphological and smoothing filters.
Figure 25. Skin detection using HSV
The resulting image, however, contains considerable noise in the classification of pixels as skin and non-skin. The next step minimizes this noise using a 5x5 structuring element in morphological filters. The structuring element is first applied with a dilation filter, which expands the skin regions; the same structuring element is then used to erode the image and reduce the imperfections that the dilation created. These techniques are used, by approximation, to fill the spaces that the H-channel range classified as skin or non-skin. Then, a 3x3 median filter is used to further smooth the results achieved by dilation and erosion, because these operations distort regions at the contours.
Figure 26. Skin detection scheme using HSV
Finally, only skin regions are represented as white pixels. This result is shown in the following figure.
Figure 27. Skin after morphological operations and filtering
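A minimal MATLAB sketch of this scheme (the H range 6–38 is taken from the text above; MATLAB's rgb2hsv returns H in [0,1], so the degree values are rescaled):

hsv = rgb2hsv(imread('face.jpg'));                 % hypothetical input image
H = hsv(:,:,1) * 360;                              % hue in degrees
skinMask = H >= 6 & H <= 38;                       % H-channel skin rule
se = strel('square', 5);                           % 5x5 structuring element
skinMask = imerode(imdilate(skinMask, se), se);    % dilation then erosion
skinMask = medfilt2(uint8(skinMask), [3 3]) > 0;   % 3x3 median filter to smooth contours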
2.4.3 RGB Color Space Overview
There are several ways to specify colors. The most common of these is the RGB color model. The RGB model defines a color by giving the intensity levels of the red, green and blue light that mix together to create a pixel on the display. With most of today's displays, the intensity of each color can vary from 0 to 255, which gives 16,777,216 different colors. RGB is the most commonly used color space in digital images.
The RGB color model is an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors: red, green, and blue.
It encodes colors as an additive combination of three primary colors: red (R), green (G) and blue (B). The RGB color space is often visualized as a 3D cube where R, G and B are the three perpendicular axes.
One main advantage of the RGB space is its simplicity. However, it is not perceptually uniform, which means distances in the RGB space do not linearly correspond to human perception. In addition, the RGB color space does not separate luminance and chrominance, and the R, G, and B components are highly correlated. The luminance of a given RGB pixel is a linear combination of the R, G, and B values. Therefore, changing the luminance of a given skin patch affects all the R, G, and B components. In other words, the location of a given skin patch in the RGB color cube changes based on the intensity of the illumination under which the patch was imaged. This results in a very stretched skin color cluster in the RGB color cube. RGB is nevertheless used extensively in the skin detection literature because of its simplicity.
The main purpose of the RGB color model is the sensing, representation, and display of images in electronic systems, such as televisions and computers, though it has also been used in conventional photography. The RGB color model already had a solid theory behind it, based on human perception of colors.
RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements and their response to the individual R, G, and B levels vary from manufacturer to manufacturer, or even in the same device over time.
To form a color with RGB, three light beams (one red, one green, and one blue) must be combined. Each of the three beams is called a component of that color, and each of them can have an arbitrary intensity, from fully off to fully on, in the mixture. The RGB color model is additive in the sense that the three light beams are added together, and their light spectra add, wavelength for wavelength, to make the final color's spectrum. Zero intensity for each component gives the darkest color (no light, considered black), and full intensity of each gives white.
Figure 28. RGB Color Model
2.4.4 Skin Detection using RGB [4][5]
To detect skin color using RGB, three main steps are followed:
1. Data preparation.
2. Skin color classifiers modeling.
3. Testing and evaluation.
Data preparation
This step involves collecting a large number of human skin images from different databases such as
Compaq dataset, Sigal dataset, Testing dataset for skin detection (TDSD), and db-skin dataset.
Image Segmentation
Image segmentation is the process of dividing an image into multiple parts. This is typically used to
identify objects or other relevant information in digital images.
An accurate skin segmentation analysis is considered important in order to have images with the exact
ground truth information and to get optimum result in skin detection experiment. Each of the test imag-
es was segmented manually using Adobe Photoshop software.
The regions of skin pixels were selected using the Magic Wand tool available in Adobe Photoshop. This tool enables the user to select a consistently colored area without having to trace its outline. It also allows the user to interactively segment regions of skin by clicking the needed area. If a contiguous area is selected, all adjacent pixels within the tolerance range of the color region are selected. The tolerance range defines how similar in color a pixel must be to be included in the region. Its value can be adjusted according to the skin image, so regions of skin with complex shapes can be segmented quickly. If skin and non-skin regions are too difficult to segment because skin and non-skin pixels have almost the same color, then manual segmentation of skin and non-skin areas using a pen tracing tool is employed; using this tool, the user traces the skin and non-skin areas manually.
The following figure illustrates the skin and non-skin annotation used to obtain ground truth skin and non-skin information.
Figure 29. An annotation process for skin and non-skin ground truth information
This process has to be done carefully to exclude the eyes, hair, mouth opening, eyebrows, moustache and other materials covering the skin area. The RGB values of skin and non-skin areas were mapped to [255 255 255] and [0 0 0], respectively.
Data Transformation
Before skin and non-skin pixels were used for experiments, each pixel of skin and non-skin
portion were transformed into 2-dimensional matrix.
Figure 30. Transformation of RGB from 3D into 2D matrix
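In MATLAB this flattening is a single reshape call (a sketch; img is assumed to be an H x W x 3 RGB image):

pixels = reshape(img, [], 3);         % (H*W) x 3 matrix: one row per pixel, columns R, G, B
img2 = reshape(pixels, size(img));    % the inverse restores the original 3-D layout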
Skin Color Modeling
Skin color distribution modeling is a third step after the choice of color model has been made and data
transformation in skin color detection algorithm development. A new technique called RGB ratio
model have been introduced. RGB Ratio is one of the explicitly defined skin region methods.
RGB ratio will be formulated by examine and observation from histogram and scatter plot.
A pixel is a skin color pixel if the four rules in the following figure hold:
Figure 31. RGB skin color rules
These rules can be interpreted as follows: the range of R values is from 96 to 255, the range of G values is from 41 to 239, and the range of B values is from 21 to 254.
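The figure listing the four rules is not reproduced here, but a widely cited explicit RGB skin rule consistent with the stated ranges looks like the following sketch; the conditions beyond the plain R/G/B ranges are assumptions drawn from the general skin detection literature, not necessarily this work's exact rules:

P = double(pixels);                                % (H*W) x 3 pixel matrix from the previous step
R = P(:,1); G = P(:,2); B = P(:,3);
isSkin = R > 95 & G > 40 & B > 20 ...              % the stated channel ranges
       & (max(P, [], 2) - min(P, [], 2)) > 15 ...  % sufficient spread across channels
       & abs(R - G) > 15 & R > G & R > B;          % skin tones are red-dominant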
Histograms of (R-G)/(R+G) and B/(R+G), i.e. the ratio of the difference between R and G over their sum, and the ratio of B over the sum of R and G, are plotted from the skin pixels of the training dataset, as shown in the following figure.
Figure 32. Histogram of (R-G)/(R+G) and B/(R+G) respectively
A new rule for skin color has been developed based on these histograms, as follows:
Testing and Evaluation
The performance of a skin color detection algorithm can be measured by two methods:
1. Quantitative techniques
2. Qualitative techniques
The quantitative method consists of two techniques: Receiver Operating Characteristics (ROC), and true and false positive rates.
The qualitative technique is based on observing the ability of the skin color classifier to classify skin and non-skin pixels in images.
The true positive (TP) and false positive (FP) rates are statistical measures of the performance of a binary classification test. Binary classification is the task of classifying the members of a given set of objects into two groups on the basis of whether or not they have some property.
The TP rate, also called sensitivity, measures the proportion of actual positives which are correctly identified as such. Meanwhile, the FP rate measures the proportion of actual negatives which are incorrectly identified. The FP rate is equal to the significance level, and the specificity of the test is equal to one minus the FP rate (1 – FP). In the case of skin color detection, the performance of a skin color detection algorithm can be translated into the following equations:

TP rate = (skin pixels correctly classified as skin) / (total skin pixels)
FP rate = (non-skin pixels incorrectly classified as skin) / (total non-skin pixels)
Figure 34. Examples of skin color classification
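A minimal MATLAB sketch of this evaluation, assuming detected and groundTruth are logical skin masks of the same size:

TP = sum(detected(:) & groundTruth(:));    % skin pixels correctly classified as skin
FP = sum(detected(:) & ~groundTruth(:));   % non-skin pixels wrongly classified as skin
tpRate = TP / sum(groundTruth(:));         % true positive rate (sensitivity)
fpRate = FP / sum(~groundTruth(:));        % false positive rate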
Chapter 3 - System Analysis
3.1 System Analysis Overview
Systems are created to solve problems. One can think of the systems approach as an organized way of dealing with a problem. In this dynamic world, the subject of System Analysis and Design (SAD) mainly deals with software development activities.
What follows defines a system, explains the different phases of the system development life cycle, enumerates the components of system analysis and explains the components of system design.
3.1.1 What is a System?
A collection of components that work together to realize some objectives forms a system. Basically, there are three major components in every system: input, processing and output.
Figure 35. Basic System Components
In a system, the different components are connected with each other and are interdependent. For example, the human body represents a complete natural system. We are also bound by many national systems, such as the political system, economic system, educational system and so forth. The objective of the system demands that some output is produced as a result of processing suitable inputs. A well-designed system also includes an additional element referred to as ‘control’, which provides feedback to achieve the desired objectives of the system.
3.1.2 System Life Cycle
The system life cycle is an organizational process of developing and maintaining systems. It helps in establishing a system project plan because it gives an overall list of the processes and sub-processes required for developing a system. The system development life cycle is a combination of various activities; in other words, the various activities put together are referred to as the system development life cycle. In System Analysis and Design terminology, the system development life cycle also means the software development life cycle.
Following are the different phases of system development life cycle:
 Preliminary study
 Feasibility study
 Detailed system study
 System analysis
 System design
 Coding
 Testing
 Implementation
 Maintenance
The next figure shows the different phases in the system development life cycle:
Figure 36. Phases of System Development Life Cycle
3.2 Phases of system development life cycle
Following is the description of the system development life cycle.
3.2.1 Preliminary System Study
Preliminary system study is the first stage of system development life cycle. This is a brief investigation
of the system under consideration and gives a clear picture of what actually the physical system is? In
practice, the initial system study involves the preparation of a ‘System Proposal’ which lists the Prob-
lem Definition, Objectives of the Study, Terms of reference for Study, Constraints, Expected benefits of
the new system, etc. in the light of the user requirements. The system proposal is prepared by the Sys-
tem Analyst (who studies the system) and places it before the user management. The management may
accept the proposal and the cycle proceeds to the next stage. The management may also reject
the proposal or request some modifications in the proposal. In summary, the system study phase pass-
es through the following steps:
 Problem identification and project initiation
 Background analysis
 Inference or findings (system proposal)
3.2.2 Feasibility Study
If the system proposal is acceptable to the management, the next phase is to examine the feasibility of the system. The feasibility study is basically a test of the proposed system in the light of its workability, meeting of user requirements, effective use of resources and, of course, cost effectiveness. These are categorized as technical, operational, economic and schedule feasibility. The main goal of the feasibility study is not to solve the problem but to determine whether the proposed scope is achievable. In the process of the feasibility study, the costs and benefits are estimated with greater accuracy to find the Return on Investment (ROI). This also defines the resources needed to complete the detailed investigation. The result is a feasibility report submitted to the management, which may be accepted, accepted with modifications or rejected. The system cycle proceeds only if the management accepts it.
3.2.3 Detailed System Study
The detailed investigation of the system is carried out in accordance with the objectives of the proposed system. This involves a detailed study of the various operations performed by the system and their relationships within and outside the system. During this process, data are collected on the available files, decision points and transactions handled by the present system. Interviews, on-site observation and questionnaires are the tools used for the detailed system study. Using the following steps, it becomes easy to draw the exact boundary of the new system under consideration:
 Keeping in view the problems and new requirements.
 Working out the pros and cons, including new areas of the system.
All the data and the findings must be documented in the form of detailed data flow diagrams (DFDs),
data dictionary, logical data structures and miniature specification. The main points to be discussed in
this stage are:
 Specification of what the new system is to accomplish based on the user requirements.
 A functional hierarchy showing the functions to be performed by the new system and their relationships with each other.
 A functional network, which is similar to the functional hierarchy but highlights the functions that are common to more than one procedure.
 A list of attributes of the entities – these are the data items which need to be held about each entity (record).
3.2.4 System Analysis
Systems analysis is the process of collecting factual data, understanding the processes involved, identifying problems and recommending feasible suggestions for improving the functioning of the system. This involves studying the business processes, gathering operational data, understanding the information flow, finding out the bottlenecks and evolving solutions for overcoming the weaknesses of the system so as to achieve the organizational goals. System analysis also includes subdividing the complex processes involving the entire system and identifying the data stores and manual processes.
The major objectives of systems analysis are to find answers for each business process: What is being done? How is it being done? Who is doing it? When is it being done? Why is it being done? How can it be improved? It is more of a thinking process and involves the creative skills of the System Analyst. It attempts to give birth to a new, efficient system that satisfies the current needs of the user and has scope for future growth within the organizational constraints. The result of this process is a logical system design. Systems analysis is an iterative process that continues until a preferred and acceptable solution emerges.
3.2.5 System Design
Based on the user requirements and the detailed analysis of the existing system, the new system must be designed. This is the phase of system design, the most crucial phase in the development of a system. The logical system design arrived at as a result of systems analysis is converted into the physical system design. Normally, the design proceeds in two stages:
1. Preliminary or General Design
2. Structured or Detailed Design
3.2.5.1 Preliminary or General Design
In the preliminary or general design, the features of the new system are specified. The costs of implementing these features and the benefits to be derived are estimated. If the project is still considered feasible, the next design stage is undertaken.
3.2.5.2 Structured or Detailed Design
In the detailed design stage, computer-oriented work begins in earnest. At this stage, the design of the system becomes more structured. Structured design is a blueprint of a computer system solution to a given problem, having the same components and inter-relationships among the components as the original problem. Input, output, databases, forms, codification schemes and processing specifications are drawn up in detail. In the design stage, the programming language and the hardware and software platform on which the new system will run are also decided.
There are several tools and techniques used for describing the system design. These tools
and techniques are:
 Flowchart
 Data flow diagram (DFD)
 Data dictionary
 Structured English
 Decision table
 Decision tree
The system design involves:
i. Defining precisely the required system output
ii. Determining the data requirements for producing the output
iii. Determining the medium and format of files and databases
iv. Devising processing methods and use of software to produce the output
v. Determining the methods of data capture and data input
vi. Designing input forms
vii. Designing codification schemes
viii. Detailing manual procedures
ix. Documenting the design
3.2.6 Coding
The system design needs to be implemented to make it a workable system. This demands coding the design into a computer-understandable language, i.e., a programming language. This is also called the programming phase, in which the programmer converts the program specifications into computer instructions. It is an important stage where the defined procedures are transformed into control specifications with the help of a computer language. The programs coordinate the data movements and control the entire process in the system. It is generally felt that the programs must be modular in nature. This helps in fast development, maintenance and future changes, if required.
3.2.7 Testing
Before actually putting the new system into operation, a test run of the system is done to remove any bugs. This is an important phase of a successful system. After codifying the whole program of the system, a test plan should be developed and run on a given set of test data. The output of the test run should match the expected results. Sometimes, system testing is considered a part of the implementation process.
Using the test data, the following test runs are carried out:
1. Program test
2. System test
3.2.7.1 Program test
When the programs have been coded, compiled and brought to working condition, they must be individually tested with the prepared test data. Any undesirable outcome must be noted and debugged (error correction).
3.2.7.2 System Test
After the program test has been carried out for each program of the system and the errors removed, the system test is done. At this stage, the test is done on actual data. The complete system is executed on the actual data, and at each stage of the execution the results or output of the system are analyzed.
During the result analysis, it may be found that the outputs do not match the expected output of the system. In such cases, the errors in the particular programs are identified, fixed and further tested for the expected output.
When it is ensured that the system is running error-free, the users are called in with their own actual data so that the system can be shown running as per their requirements.
3.2.8 Implementation
After user acceptance of the newly developed system, the implementation phase begins. Implementation is the stage of a project during which theory is turned into practice. The major steps involved in this phase are:
 Acquisition and Installation of Hardware and Software
 Conversion
 User Training
 Documentation
The hardware and the relevant software required for running the system must be made fully operational before implementation. The conversion is also one of the most critical and expensive activities in the system development life cycle. The data from the old system needs to be converted to operate in the format of the new system. The database needs to be set up with security and recovery procedures fully defined.
During this phase, all the programs of the system are loaded onto the user's computer. After loading the system, training of the users starts. The main topics of such training are:
 How to execute the package
 How to enter the data
 How to process the data (processing details)
 How to take out the reports
3.2.8.1 Changeover
After the users are trained on the computerized system, working has to shift from manual to computerized operation. This process is called 'changeover'. The following strategies are followed for the changeover of the system.
3.2.8.1.1 Direct Changeover
This is the complete replacement of the old system by the new system. It is a risky approach and re-
quires comprehensive system testing and training.
3.2.8.1.2 Parallel run
In a parallel run, both systems, i.e., computerized and manual, are executed simultaneously for a certain defined period. The same data is processed by both systems. This strategy is less risky but more expensive because of the following:
 Manual results can be compared with the results of the computerized system.
 The operational work is doubled.
 Failure of the computerized system at the early stage does not affect the working of the
organization, because the manual system continues to work, as it used to do.
3.2.8.1.3 Pilot run
In this type of run, the new system is run with data from one or more previous periods for the whole or part of the system. The results are compared with the old system's results. It is less expensive and less risky than the parallel-run approach. This strategy builds confidence, and errors are traced easily without affecting operations.
3.2.9 Maintenance
Maintenance is necessary to eliminate errors in the system during its working life and to tune the system to any variations in its working environment. It has been seen that there are always some errors in a system that must be noted and corrected. Maintenance also means reviewing the system from time to time. The review of the system is done for:
 Knowing the full capabilities of the system
 Knowing the required changes or the additional requirements
 Studying the performance.
If a major change to a system is needed, a new project may have to be set up to carry out the change.
The new project will then proceed through all the above life cycle phases.
3.2.10 Documentation
The documentation of the system is also one of the most important activities in the system development life cycle. It ensures the continuity of the system. There are generally two types of documentation prepared for any system. These are:
1. User or Operator Documentation
2. System Documentation
3.2.10.1 User or Operator Documentation
The user documentation is a complete description of the system from the user's point of view, detailing how to use or operate the system. It also includes the major error messages likely to be encountered by the users.
3.2.10.2 System Documentation
The system documentation contains the details of the system design, programs, their coding, system flow, data dictionary, process descriptions, etc. This helps in understanding the system and permits changes to be made to the existing system to satisfy new user needs.
3.3 Data Flow Diagram (DFD)
3.3.1 Context Diagram
The context diagram is the highest level in a data flow diagram and contains only one process, repre-
senting the entire system. The process is given the number zero. All external entities are shown on the
context diagram, as well as major data flow to and from them. The diagram does not contain any data
stores and is fairly simple to create, once the external entities and the data flow to and from them are
known to analysts.
Figure 37. DFD Context Diagram
As shown in the context diagram, the system interacts with the mobile or desktop devices by requesting a face image of a user. The devices then capture an image using their cameras and send the requested image back to the system.
The Face Login system is responsible for all the processing required to detect faces, following the steps presented in the system overview.
The devices provide the system with information about their location and give an alert in case the captured image is of a malicious user.
The system also interacts with the user by taking images from them by different means, such as uploading an image, and by providing information about these images, such as their timing and location.
There is also a website that accepts user uploads, both images created manually and images captured automatically by the system.
The website is connected to a large database that stores all of these images and allows ordinary Internet users to search for malicious users in its collection.
3.3.2 Level 0 Diagram
More detail than the context diagram permits is achievable by “exploding the diagrams.” Inputs and
outputs specified in the first diagram remain constant in all subsequent diagrams. The rest of the origi-
nal diagram, however, is exploded into close-ups involving three to nine processes and showing data
stores and new lower-level data flows. The effect is that of taking a magnifying glass to view the origi-
nal data flow diagram. Each exploded diagram should use only a single sheet of paper. By exploding
DFDs into subprocesses, the systems analyst begins to fill in the details about data movement. The han-
dling of exceptions is ignored for the first two or three levels of data flow diagramming.
Diagram 0 is the explosion of the context diagram and may include up to nine processes. Including
more processes at this level will result in a cluttered diagram that is difficult to understand. Each pro-
cess is numbered with an integer, generally starting from the upper left-hand corner of the diagram and
working toward the lower right-hand corner. The major data stores of the system (representing master
files) and all external entities are included on Diagram 0.
Figure 38. DFD Level 0 Diagram
In the DFD level 0 diagram shown, the mobile application captures images of the clients that pass through the camera's field of view. The system then detects their faces.
The application has the ability to store the captured images locally in its private database and can also send the images to the system, which can be distributed on a large scale to allow different users to access it.
The website is connected to a database in which all user uploads and captured images are stored.
3.4 Entity Relationship Diagram (ERD)
An entity-relationship diagram (ERD) is a graphical representation of an information system that shows the relationships between the people, objects, places, concepts or events within that system. An ERD is a data modeling technique that can help define business processes and can be used as the foundation for a relational database.
Figure 39. ERD Diagram
3.5 Unified Modeling Language (UML)
3.5.1 Use Case Diagram
Use case diagrams are used for the high-level requirements analysis of a system. When the requirements of a system are analyzed, the functionalities are captured in use cases. This diagram is a graphic depiction of the interactions among the elements of a system. A use case is a methodology used in system analysis to identify, clarify and organize system requirements. Use case diagrams are drawn to capture the functional requirements of a system.
Figure 40. UML Use Case Diagram
As shown in this figure, the system provides a number of functions such as:
 Capturing images
 Storing images in the database
 Processing images
 Finding information about users and others
The client can interact with the system by different means: the client can be the captured face that the system will process, or the provider of a different face image to be stored in the system database. The client also provides information about uploaded images.
The client can also search the system database for specific images distributed across the website.
The camera can capture images and provide information about its location.
3.5.2 Class Diagram
The class diagram is a static diagram. It represents the static view of an application. The class diagram is not only used for visualizing, describing and documenting different aspects of a system, but also for constructing executable code of the software application.
The class diagram describes the attributes and operations of a class and also the constraints imposed on the system. Class diagrams are widely used in the modeling of object-oriented systems because they are the only UML diagrams which can be mapped directly to object-oriented languages, and so they are widely used at the time of construction.
The class diagram shows a collection of classes, interfaces, associations, collaborations and constraints. It is also known as a structural diagram.
UML diagrams like the activity diagram and sequence diagram can only give the sequence flow of the application, but the class diagram is different in this respect. It is therefore the most popular UML diagram in the coder community.
Figure 41. UML Class Diagram
3.5.3 Sequence Diagram
UML sequence diagrams are used to show how objects interact in a given situation. An important characteristic of a sequence diagram is that time passes from top to bottom: the interaction starts near the top of the diagram and ends at the bottom (i.e. lower equals later).
A popular use for them is to document the dynamics of an object-oriented system. For each key collaboration, diagrams are created that show how objects interact in various representative scenarios for that collaboration.
The sequence diagram is used primarily to show the interactions between objects in the sequential order that those interactions occur. Much like the class diagram, developers typically think sequence diagrams are meant exclusively for them. However, an organization's business staff can find sequence diagrams useful for communicating how the business currently works by showing how various business objects interact. Besides documenting an organization's current affairs, a business-level sequence diagram can be used as a requirements document to communicate requirements for a future system implementation. During the requirements phase of a project, analysts can take use cases to the next level by providing a more formal level of refinement. When that occurs, use cases are often refined into one or more sequence diagrams.
Figure 42. UML Sequence Diagram
In the sequence diagram presented, the camera captures images from the users. The camera is always alive to capture users' face images at any time.
Once the camera captures an image, it is sent to the system for processing, which follows all the steps in the system overview.
At first there is a preprocessing step using the Retinex algorithm, which adjusts the image brightness. Then skin pixels are detected in the image and, based on the skin, face regions are extracted.
If any region that is likely to be a face is detected, it is sent to the feature extraction step for further processing. If there are features in the region, further processing takes place by enhancing the detected features and measuring distances.
If no features are extracted, the system goes back to select another region.
3.6 System Overview
A simple diagram showing the steps followed when accepting a new image to the end of the system is
shown in the following figure:
Figure 43. System Flow Diagram
At first, an image containing a face is considered; then some enhancements are made to it by simple preprocessing mechanisms.
Because the aim is to detect skin and other regions whose colors affect the results, it is important to make sure that colors remain relatively constant under varying illumination conditions. One of the preprocessing techniques used is the Retinex algorithm [5].
The Retinex algorithm was originally proposed by Land and McCann in 1971 and is one of the algorithms that can enhance images suffering from poor lighting and changing illumination conditions.
Mainly, it consists of two major steps: estimation and normalization of illumination. It overcomes the discrepancy between what the human eye naturally sees and what the camera collects under changing conditions, much as the retina of the human eyeball does, hence the name Retinex.
Retinex, which belongs to the center/surround class of color constancy algorithms, modifies each input RGB pixel value: the output value is determined by the input pixel (the center) and its surrounding neighbors, aiming to reach the sharpest pixel colors and to remove the effects of noise, nearby objects, contrast, illumination changes, and other distortions.
The center is defined as each RGB pixel value, and the surround is a Gaussian function. Illumination can be eliminated by using Gaussian masks to smooth the original image.
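As an illustration of this center/surround formulation, the following is a minimal sketch of a single-scale Retinex, one common variant of this class of algorithms: the output is the logarithmic ratio between each pixel (the center) and a Gaussian-smoothed version of the image (the surround). It assumes OpenCV and NumPy, and the sigma value is an arbitrary illustrative choice, not a parameter taken from this project:

import cv2
import numpy as np

def single_scale_retinex(bgr, sigma=80.0):
    """Single-scale Retinex: log(center) - log(Gaussian surround),
    applied per channel, then rescaled to 0..255 for display."""
    img = bgr.astype(np.float64) + 1.0               # avoid log(0)
    surround = cv2.GaussianBlur(img, (0, 0), sigma)  # Gaussian mask estimates the illumination
    retinex = np.log(img) - np.log(surround)         # remove the illumination component
    out = np.zeros_like(retinex)
    for c in range(3):
        ch = retinex[:, :, c]
        # normalize each channel back to the displayable 0..255 range
        out[:, :, c] = 255 * (ch - ch.min()) / (ch.max() - ch.min() + 1e-12)
    return out.astype(np.uint8)

A larger sigma estimates a smoother illumination field and preserves more of the global contrast of the scene.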
Compared to histogram equalization (HE), it gives better results. This helps to find most of the skin pixels, if they exist, in many cases, including cases of data loss that heavily change the skin color, such as that caused by dividing the image by a constant.
The following figure shows an example of applying this algorithm. The image is first divided by 5, 10, and 15 as shown in (b); the results of then applying the algorithm are shown in (c).
Figure 44. Retinex Algorithm Results. Original RGB image (a), image after division by 5, 10,
and 15 (b), and result of Retinex algorithm (c)
From the results, the Retinex algorithm proves that it works well under loss of data and has the ability to restore it.
After that, the skin detector is applied. This detector is regarded as a classifier that classifies all pixels into two categories:
1. Skin
2. Not skin
It sets all skin-like pixels to 1 and all other pixels to 0, resulting in a binary image as shown in the next figure.
Figure 45. Skin Detection Example. Original RGB image (a) , detected skin (b), and skin after
morphological operations (c)
Glasses, brows and other objects can affect the result of face detection after successfully detecting skin, as shown in (b). To sharpen the results and remove small objects that, compared to the others, cannot be a face, morphological operations are used. This is shown in (c). The two basic operations are erosion and dilation. Erosion is used to cut off the boundaries of foreground objects in a binary image, so foreground areas shrink in size and holes within these objects get larger. Dilation is the reverse of erosion. Two other operations, based on dilation and erosion, are opening and closing. Finally, a combination of these morphological operations is used, as sketched below.
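A minimal sketch of such a combination, assuming OpenCV and a binary skin mask as input; the elliptical kernel, its size and the opening-then-closing order are illustrative choices rather than the exact settings of this project:

import cv2

def clean_skin_mask(mask):
    """Refine a binary skin mask: opening removes small non-face
    blobs, closing fills small holes left inside skin regions."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    # Opening = erosion then dilation: removes small foreground objects
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Closing = dilation then erosion: fills small holes and gaps
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
    return closed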
Other effects, such as regarding the neck or shoulders as part of the face, are eliminated by applying the face feature detector. This detector checks for components of the face region, such as the nose and mouth, and any object that does not contain these components is eliminated.
After detecting these components, the distances between them are measured to create a feature vector. This vector is the output of the system; it is compared with vectors from other faces to check whether the two faces are identical, so that it can serve as part of a recognition system.
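As an illustration of how such a comparison could work, the sketch below builds a feature vector from the pairwise distances between face component centers and compares two vectors with a Euclidean distance against a threshold. The component names, the normalization and the threshold value are illustrative assumptions, not values taken from this project:

import numpy as np

def face_feature_vector(eye_left, eye_right, nose, mouth):
    """Build a feature vector from pairwise distances between
    face component centers, given as (x, y) coordinates."""
    pts = [np.array(p, dtype=float) for p in (eye_left, eye_right, nose, mouth)]
    dists = [np.linalg.norm(a - b) for i, a in enumerate(pts) for b in pts[i + 1:]]
    v = np.array(dists)
    return v / v.max()   # normalize so the vector is scale-invariant

def same_face(v1, v2, threshold=0.1):
    """Two faces are considered identical if their feature
    vectors are closer than an (illustrative) threshold."""
    return np.linalg.norm(v1 - v2) < threshold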
3.7 System Phases
This section will carry out the description of the major phases involved in the system development.
3.7.1 Image Capture
In this step, an image that contains a face is captured using a simple camera. The image format can be any of the available formats, such as:
 Joint Photographic Experts Group (JPEG)
 Portable Network Graphics (PNG)
 Tagged Image File Format (TIFF)
 Windows Bitmap (BMP)
 Portable Pixmap (PPM)
 Portable Graymap (PGM)
 Portable Bitmap (PBM)
Examples of images that the system can work with are shown in the next figure:
Figure 46. Examples of input image to the system
This captured image is the input to the system that will be further processed.
3.7.2 Image Preprocessing
Image preprocessing, also called image restoration, can significantly increase the reliability of an optical inspection. Several filter operations which intensify or reduce certain image details enable an easier or faster evaluation. It involves the correction of distortion, degradation, and noise introduced during the imaging process. Preprocessing images commonly involves removing low-frequency background noise, normalizing the intensity of individual particle images, removing reflections, and masking portions of images. Image preprocessing is the technique of enhancing data images prior to computational processing.
There are four categories of image preprocessing methods, according to the size of the pixel neighborhood that is used for the calculation of a new pixel brightness:
1. Pixel brightness transformations
2. Geometric transformations
3. Preprocessing methods that use a local neighborhood of the processed pixel
4. Image restoration that requires knowledge about the entire image
Image preprocessing methods use the considerable redundancy in images. Neighboring pixels corresponding to one object in real images have essentially the same or similar brightness values. Thus, a distorted pixel can often be restored as the average value of its neighboring pixels.
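A minimal sketch of this restoration idea for a single-channel image, assuming NumPy; the 3x3 neighborhood is an illustrative choice:

import numpy as np

def restore_pixel(img, y, x):
    """Restore a distorted pixel of a single-channel image as the
    average of its 3x3 neighborhood (excluding the pixel itself)."""
    h, w = img.shape[:2]
    ys = slice(max(0, y - 1), min(h, y + 2))  # clip the window at the image border
    xs = slice(max(0, x - 1), min(w, x + 2))
    patch = img[ys, xs].astype(float)
    total = patch.sum() - float(img[y, x])    # drop the distorted center value
    count = patch.size - 1
    return total / count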
If preprocessing aims to correct some degradation in the image, the nature of the a priori information is important. This may be knowledge about the nature of the degradation, where only very general properties of the degradation are assumed; knowledge about the properties of the image acquisition device and the conditions under which the image was obtained, where the nature of the noise (usually its spectral characteristics) is sometimes known; or knowledge about the objects that are searched for in the image, which may simplify the preprocessing very considerably. If knowledge about the objects is not available in advance, it can be estimated during processing.
Illumination variation is the most annoying effect that needs to be eliminated from the input image. The next figure shows an example of the illumination effect.
Figure 47. Illumination effect
Illumination has enormously complex effects on the image of an object. In the image of a familiar face, changing the direction of illumination leads to shifts in the location and shape of shadows, changes in highlights, and reversal of contrast gradients. Yet everyday experience shows that humans are remarkably good at recognizing faces despite such variations in lighting. It can be examined how humans recognize faces given image variations caused by changes in lighting direction and by cast shadows. One issue is whether faces are represented in an illumination-invariant or illumination-dependent manner. A second issue is whether cast shadows improve face recognition by providing information about surface shape and illumination direction, or hinder performance by introducing spurious edges that must be discounted prior to recognition. The influences of illumination direction and cast shadows can be examined using both short-term and long-term memory paradigms. Images of the same face appear differently due to the change in lighting. If the change induced by illumination is larger than the difference between individuals, systems will not be able to recognize the input image.
There are many ways that preprocessing can be applied:
 Normalization
 Filters
 Soft focus, selective focus
 User-specific filter
 Static/dynamic binarisation
 Image plane separation
 Binning
One of the techniques used is histogram equalization, a technique for adjusting image intensities to enhance contrast. It is not guaranteed that the contrast will always increase; there may be cases where histogram equalization makes things worse, and in those cases the contrast is decreased.
Histogram equalization provides a sophisticated method for modifying the dynamic range and contrast of an image by altering the image such that its intensity histogram has a desired shape. Unlike contrast stretching, histogram modeling operators may employ non-linear and non-monotonic transfer functions to map between pixel intensity values in the input and output images. Histogram equalization employs a monotonic, non-linear mapping which reassigns the intensity values of the pixels in the input image such that the output image contains a uniform distribution of intensities (i.e. a flat histogram). This technique is used in image comparison processes (because it is effective in detail enhancement) and in the correction of non-linear effects introduced by, say, a digitizer or display system.
Equalization implies mapping one distribution (the given histogram) to another distribution (a wider and more uniform distribution of intensity values) so that the intensity values are spread over the whole range.
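A minimal sketch of this mapping for an 8-bit grayscale image, assuming NumPy; the normalized cumulative histogram serves as the monotonic, non-linear transfer function:

import numpy as np

def equalize_histogram(gray):
    """Histogram equalization for an 8-bit grayscale image:
    map intensities through the normalized cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize the CDF to 0..1
    lut = np.round(cdf * 255).astype(np.uint8)         # monotonic, non-linear mapping
    return lut[gray]

OpenCV offers the same operation as cv2.equalizeHist.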
An example of how HE works is shown in the following figure.
Figure 48. Before and after applying histogram equalization over an image
Another algorithm that can be used is RETINEX, which is more effective and more efficient than histogram equalization.
RETINEX: 're-tin-ex, 'ret-nex; noun; (pl.) retinexes; from Medieval Latin retina and Latin cortex. Edwin Land coined the word for his model of human color vision, combining the retina of the eye and the cerebral cortex of the brain. More specifically, it is defined in image processing as a process that automatically provides visual realism to images.
It is one of the color constancy enhancement algorithms and uses the Fast Fourier Transform. It has the ability to determine the colors of objects irrespective of the illumination conditions and of the color of nearby objects.
This is an important characteristic of the Human Visual System (HVS). The HVS is able to compute descriptors which define the object color independently of the illumination present in the scene and independently of the color of the surrounding objects.
The goal of color constancy research is to achieve these descriptors, which means discounting the effect of illumination and obtaining a canonical color appearance.
The basic Retinex model is based on the assumption that the HVS operates with three retinal-cortical
systems, each one processing independently the low, middle and high frequencies of the visible elec-
tromagnetic spectrum.
Each system produces one lightness value which determines, by superposition, the perception of color in the HVS. On digital RGB images, the lightness is represented by the triplet (Lr, Lg, Lb) of lightness values in the three chromatic channels.
Edges are the main source of information for achieving color constancy. Moreover, Land and McCann realized that the procedure of taking the ratio between two adjacent points can both detect an edge and eliminate the effect of non-uniform illumination.
An example of the results obtained after applying the Retinex algorithm to a degraded color image is shown in the next figure.
Figure 49. Applying RETINEX over a degraded color image
3.7.3 Skin and Face Detection
Skin detection is the process of finding skin-colored pixels and regions in an image or a video. This process is typically used as a preprocessing step to find regions that potentially contain human faces and limbs. Several computer vision approaches have been developed for skin detection. A skin detector typically transforms a given pixel into an appropriate color space and then uses a skin classifier to label the pixel as skin or non-skin. A skin classifier defines a decision boundary of the skin color class in the color space, based on a training database of skin-colored pixels.
Detecting skin-colored pixels, although it seems a straightforward task, has proven quite challenging for many reasons. The appearance of skin in an image depends on the illumination conditions (illumination geometry and color) under which the image was captured. Humans are very good at identifying object colors in a wide range of illuminations; this is called color constancy. Color constancy is a mystery of perception.
Therefore, an important challenge in skin detection is to represent the color in a way that is invariant or
at least insensitive to changes in illumination. This is why Retinex is used.
The choice of the color space greatly affects the performance of any skin detector and its sensitivity to changes in illumination conditions.
Another challenge comes from the fact that many objects in the real world might have skin-tone colors.
For example, wood, leather, skin-colored clothing, hair, sand, etc. This causes any skin detector to have
many false detections in the background if the environment is not controlled.
In any given color space, skin color occupies a part of the space, which might be a compact or a large region. Such a region is usually called the skin color cluster. Skin classification is a one-class or two-class classification problem: a given pixel is classified and labeled as skin or non-skin given a model of the skin color cluster in a given color space. In the context of skin classification, true positives are skin pixels that the classifier correctly labels as skin, and true negatives are non-skin pixels that the classifier correctly labels as non-skin. Any classifier makes errors: it can wrongly label a non-skin pixel as skin or a skin pixel as non-skin. The former type of error is referred to as false positives (false detections), while the latter is false negatives. A good classifier should have low false positive and false negative rates. As in any classification problem, there is a trade-off between false positives and false negatives: the looser the class boundary, the fewer the false negatives and the more the false positives; the tighter the class boundary, the more the false negatives and the fewer the false positives. The same applies to skin detection. This makes the choice of the color space extremely important in skin detection. The color needs to be represented in a color space where the skin class is most compact, in order to be able to tightly model the skin class. The choice of the color space directly affects the kind of classifier that should be used.
3.7.3.1 Color Space Selection
The human skin color has a restricted range of hues and is not deeply saturated, since the appearance of skin is formed by a combination of blood (red) and melanin (brown, yellow). Therefore, human skin color does not fall randomly in a given color space, but is clustered in a small area of it. This area, however, is not the same for all color spaces.
The next figure shows density plots of skin-colored pixels obtained from images of different Asian people, plotted in different color spaces. The same skin color is located differently in different color spaces.
Figure 50. Density plots of Asian skin in different color spaces
The next figure also shows density plots of skin-colored pixels from people of different races (Asian, African and Caucasian), plotted in different color spaces.
Figure 51. Density plots of Asian, African and Caucasian skin in different color spaces
A variety of color spaces have been used in the skin detection literature with the aim of finding a color space where the skin color is invariant to illumination conditions. The choice of color space affects the shape of the skin class, which in turn affects the detection process.
Some color spaces have their luminance component separated from the chromatic components, and they are known to possess higher discriminability between skin and non-skin pixels over various illumination conditions. Skin color models that operate only on chrominance subspaces, such as Cb-Cr and H-S, have been found to be effective in characterizing various human skin colors. Skin classification can be accomplished by explicitly modeling the skin distribution in certain color spaces using parametric decision rules. Some researchers made a set of rules to describe the skin cluster in RGB space, while others used sets of bounding rules to classify skin regions in both the YCbCr and HSV spaces.
A variety of classification techniques have been used in the literature for the task of skin classification. A skin classifier is a one-class classifier that defines a decision boundary of the skin color class in a feature space; the feature space in the context of skin detection is simply the chosen color space. Any pixel whose color falls inside the skin color class boundary is labeled as skin. Therefore, the choice of the skin classifier is directly induced by the shape of the skin class in the color space chosen by the skin detector. The more compact and regularly shaped the skin color class, the simpler the classifier.
To enable greater flexibility in the detection of skin color, not just one color space is used, but a combination of color spaces: RGB, HSV and YCbCr.
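As a brief illustration, the three representations of one image can be obtained with OpenCV as follows (the file name is a placeholder, and note that OpenCV loads images in BGR channel order):

import cv2

bgr = cv2.imread("face.jpg")                    # placeholder file; RGB data in BGR order
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)      # hue, saturation, value
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)  # luminance Y, chrominance Cr and Cb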
3.7.3.2 RGB-H-CbCr Color Space [6]
While RGB, HSV and YUV (YCbCr) are standard models used in various color imaging applications, not all of their information is necessary to classify skin color.
This model utilizes the additional hue and chrominance information of the image on top of the standard RGB properties to improve the discriminability between skin and non-skin pixels.
Skin regions are classified using the RGB boundary rules introduced by Peer et al. in [7] and also additional new rules for the H and CbCr subspaces. These rules are constructed based on the skin color distribution obtained from the training images. The classification of the extracted regions is further refined using a parallel combination of morphological operations.
The next figure shows the steps used from detecting skin color to detecting faces in the image.
Figure 52. System overview for face detecting using skin color
In this color-based approach to face detection, the proposed RGB-H-CbCr skin model is first formulated using a set of skin-cropped training images. Three commonly known color spaces (RGB, HSV and YCbCr) are used to construct the proposed hybrid model. Bounding planes or rules for each skin color subspace are constructed from their respective skin color distributions.
In the first step of the detection stage, these bounding rules are used to segment the skin regions of input test images. After that, a combination of morphological operations is applied to the extracted skin regions to eliminate possible non-face skin regions. Finally, the last step labels all the face regions in the image and returns them as detected faces. In this system, there is also a preprocessing step: applying the Retinex algorithm.
3.7.3.3 Skin Color Subspace Analysis
In RGB space, the skin color region is not well distinguished in all 3 channels. A simple observa-
tion of its histogram will show that it is uniformly spread across a large spectrum of values.
In HSV space, the H (Hue) channel shows significant discrimination of skin color regions, as
observed from the H-V and H-S plots in the next figure.
Figure 53. H-V and H-S subspace plots
Figure 54. Distribution of the H (Hue) channel
Both plots exhibit a very similar distribution of pixels.
In the hue channel distribution, most of the skin color samples are concentrated at values between 0 and 0.1 and between 0.9 and 1.0 (on a normalized scale of 0 to 1).
Some studies have indicated that pixels belonging to skin regions possess similar chrominance (Cb and Cr) values. These values have also been shown to provide good coverage of all human races. The Cb-Cr subspace offers the best discrimination between skin and non-skin regions. The next figure shows the compact distribution of the chrominance values (Cb and Cr) in comparison with the luminance value (Y). It is also observed that varying intensity values of Y (luminance) do not alter the skin color distribution in the Cb-Cr subspace; the luminance property merely characterizes the brightness of a particular chrominance value.
Figure 55. Distribution of Y, Cb and Cr respectively
3.7.3.4 Skin Color Bounding Rules
From the skin color subspace analysis, a set of bounding rules is derived from all three color spaces: RGB, YCbCr and HSV.
All rules are derived for intensity values between 0 and 255. In RGB space, the skin color rules introduced by Peer et al. [8] can be used. One rule describes skin color at uniform daylight illumination, while a second rule covers skin color under flashlight or daylight lateral illumination. The two rules are combined by a logical OR to enable detection of skin colors under both daylight and night conditions.
Based on the observation that the Cb-Cr subspace is a strong discriminant of skin color, five bounding rules that enclose the Cb-Cr skin color region are formulated.
In the HSV space, the hue values exhibit the most noticeable separation between skin and non-skin regions, and two cut-off levels are estimated as the H subspace skin boundaries.
The rules for RGB, YCbCr and HSV are combined by a logical AND to detect skin. This creates the range of skin in the combined color spaces, as sketched below.
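The numeric bounds themselves appear as images in the original document and are not preserved in this copy. The sketch below implements the combined classifier using the rule constants as commonly published for the Peer et al. RGB rules and the RGB-H-CbCr model; these values are quoted from the literature and should be verified against [6]-[8]. It assumes OpenCV, NumPy and a BGR input image:

import cv2
import numpy as np

def skin_mask(bgr):
    """Label each pixel skin/non-skin using combined RGB, CbCr
    and H bounding rules (constants as published for RGB-H-CbCr)."""
    b, g, r = [c.astype(np.int32) for c in cv2.split(bgr)]

    # Rule A: Peer et al. RGB rules, daylight OR flashlight illumination
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    daylight = (r > 95) & (g > 40) & (b > 20) & (mx - mn > 15) & \
               (np.abs(r - g) > 15) & (r > g) & (r > b)
    flash = (r > 220) & (g > 210) & (b > 170) & \
            (np.abs(r - g) <= 15) & (b < r) & (b < g)
    rule_a = daylight | flash

    # Rule B: five bounding lines enclosing the skin cluster in Cb-Cr
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1].astype(np.float32)
    cb = ycrcb[:, :, 2].astype(np.float32)
    rule_b = (cr <= 1.5862 * cb + 20) & \
             (cr >= 0.3448 * cb + 76.2069) & \
             (cr >= -4.5652 * cb + 234.5652) & \
             (cr <= -1.15 * cb + 301.75) & \
             (cr <= -2.2857 * cb + 432.85)

    # Rule C: two cut-off levels on the hue channel (0..255 scale,
    # matching the normalized 0-0.1 and 0.9-1.0 bands noted above)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV_FULL)
    h = hsv[:, :, 0]
    rule_c = (h < 25) | (h > 230)

    # The rules for the three spaces are combined by a logical AND
    return (rule_a & rule_b & rule_c).astype(np.uint8) * 255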
3.7.3.5 Morphological Operations
Up to this step, skin has been detected efficiently. The next step of the face detection system involves the use of morphological operations to refine the extracted skin regions.
Sub-regions can be easily grouped together by applying simple dilation to the large regions. Holes and gaps within each region can also be closed by a flood-fill operation. The problem of occlusion often occurs in the detection of faces in large groups of people; even faces in close proximity may result in the detection of one single region due to the nature of pixel-based methods. Hence, morphological opening is used to "open up" or pull apart narrow, connected regions.
Figure 56. Detected skin after morphological operations
3.7.3.6 Skin Detection Results
This section is a comparative section between the listed color spaces, comparing their accuracy in detecting skin color.
Spellings Wk 3 English CAPS CARES Please Practise
AnaAcapella
 

Recently uploaded (20)

Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
 
Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)
 
Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
 
Micro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfMicro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdf
 
Spatium Project Simulation student brief
Spatium Project Simulation student briefSpatium Project Simulation student brief
Spatium Project Simulation student brief
 
Graduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - EnglishGraduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - English
 
Salient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functionsSalient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functions
 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
 
Interdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptxInterdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptx
 
This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.
 
Towards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptxTowards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptx
 
Fostering Friendships - Enhancing Social Bonds in the Classroom
Fostering Friendships - Enhancing Social Bonds  in the ClassroomFostering Friendships - Enhancing Social Bonds  in the Classroom
Fostering Friendships - Enhancing Social Bonds in the Classroom
 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
 
Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)
 
Food safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfFood safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdf
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
 
FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024
 
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
 

Graduation Project - Face Login : A Robust Face Identification System for Security-Based Services

Acknowledgement

Sincere thanks to the supervisor of this project, Dr. Noura, for her full support in producing this work and for sharing her experience.
Dedication

This work is dedicated to every Egyptian who wants to raise this country to the highest ranks.
Abstract

The human face is one of the most representative parts of the human body and has a wide range of applications. Biometrics is an emerging area of bioengineering concerned with automatically recognizing a person based on a physiological or behavioral characteristic.

Using the face, an identification system can differentiate among persons from a single image. The proposed system uses image processing and pattern recognition techniques that allow the detection and identification of the applied face image with high accuracy and low computational complexity.

The main approach is to enhance the quality of the applied image using preprocessing algorithms such as the Retinex algorithm, then to detect human skin color using a combination of color spaces (RGB, HSV, and YCbCr). Skin detection accuracy using the combined RGB-H-CbCr rules reaches 97%. Features extracted from the face then allow its components (nose, mouth, eyes, and chin) to be detected using the Viola-Jones algorithm and the Frangi filter. The last major phase extracts features based on the distances between the centers of the detected components and compares them with the features of other face images to make the identification decision. The extracted feature vector contains 11 distances.

Two large face databases are used: the Center for Vital Longevity Face database and the VidTIMIT Audio-Video database, which contain images with different expressions, allowing the system to be evaluated under different conditions. The proposed system achieves an accuracy of up to 98% on these databases.

A full MATLAB-based implementation of the system is provided. Integration between MATLAB, Java, and Android is used to create distributable desktop and mobile applications that can serve as a security system, logging users in by their face images rather than by text passwords, with their known complexity. A web-based service is also provided that allows websites to use this system for login.

The proposed system can also be used in various other areas, such as detecting criminals and malicious users, enhancing security by combining it with surveillance cameras to recognize human faces directly, helping families find lost children by searching with their images via the web site, and automatically raising an alert if a VIP enters a public organization.
Symbols and Abbreviations

AFR: Automatic Face Recognition
PCA: Principal Component Analysis
EBGM: Elastic Bunch Graph Matching
RGB: Red, Green, Blue
HSV: Hue, Saturation, Value
YCbCr: Luminance, Chroma Blue (Cb), Chroma Red (Cr)
DFD: Data Flow Diagram
UML: Unified Modeling Language
ERD: Entity Relationship Diagram
List Of Tables

Table 1. Skin Detection Accuracy by Different Color Models
Table 2. Accuracy of the Proposed Framework on the Center for Vital Longevity Database
Table 3. Accuracy of the Proposed Framework on the VidTIMIT Audio-Video Database
List Of Figures

Figure 1. Face Recognition Challenges
Figure 2. Face Illumination Problem
Figure 3. Pose Problem
Figure 4. Error rate of face recognition from 1993 to 2006
Figure 5. Captured face image is compared with a set of images
Figure 6. Face identification System
Figure 7. Presenting Identity to users
Figure 8. Normal face, average face from the AR Face database, and normalized face
Figure 9. Graph Matching
Figure 10. Converting RGB to YCbCr
Figure 11. Visualization of YCbCr in terms of its components
Figure 12. Face Components Extraction Steps
Figure 13. Face Detection Stages
Figure 14. Face Region Detected
Figure 15. Face Region Division Model
Figure 16. Eye Map Formulation
Figure 17. Mouth Map Formulation
Figure 18. Face Components After Face Division Process
Figure 19. A procedure of processing face images in the second-order eigenface method
Figure 20. An illustration of the PCA mixture model
Figure 21. An iterative EM learning algorithm
Figure 22. Examples of face image reconstructions
Figure 23. Binary classifier to segment color image pixels into skin and non-skin
Figure 24. HSV Color Model
Figure 25. Skin detection using HSV
Figure 26. Skin detection scheme using HSV
Figure 27. Skin after morphological operations and filtering
Figure 28. RGB Color Model
Figure 29. An annotation process for skin and non-skin ground truth information
Figure 30. Transformation of RGB from 3D into 2D matrix
Figure 31. RGB skin color rules
Figure 32. Histogram of (R-G)/(R+G) and B/(R+G) respectively
Figure 33. RGB skin color rules based on color histogram
Figure 34. Examples of skin color classification
Figure 35. Basic System Components
Figure 36. Phases of System Development Life Cycle
Figure 37. DFD Context Diagram
Figure 38. DFD Level 0 Diagram
Figure 39. ERD Diagram
Figure 40. UML Use Case Diagram
Figure 41. UML Class Diagram
Figure 42. UML Sequence Diagram
Figure 43. System Flow Diagram
Figure 44. Retinex Algorithm Results
Figure 45. Skin Detection Example
Figure 46. Examples of input image to the system
Figure 47. Illumination effect
Figure 48. Before and after applying histogram equalization over an image
Figure 49. Applying Retinex over degraded color images
Figure 50. Density plots of Asian skin in different color spaces
Figure 51. Density plots of Asian, African and Caucasian skin in different color spaces
Figure 52. System overview for face detection using skin color
Figure 53. H-V and H-S subspace plots
Figure 54. Distribution of the H (Hue) channel
Figure 55. Distribution of Y, Cb and Cr respectively
Figure 56. Detected skin after morphological operations
Figure 57. Skin detection accuracy by different color spaces
Figure 58. Skin Color Detection
Figure 59. False alarms in face detection
Figure 60. Face Features Detection
Figure 61. Frangi Filter Result
Figure 62. Eye Pupil Detection
Figure 63. Simple Eye Diagram
Figure 64. Sclera Detection Example
Figure 65. Eye Detection Example
Figure 66. Enhanced Nose Detection
Figure 67. Enhanced Mouth Detection
Figure 68. Some faces annotated with bounding box over eyes, nose and mouth
Figure 69. Cropped face showing distances between each 2 face components
Figure 70. Diagram showing how to measure distance between left eye center and mouth center
Figure 71. MATLAB Screen
Figure 72. Java Application Screen
Figure 73. JDeveloper Studio Screen
Figure 74. Eclipse IDE Screen
Figure 75. Identification Experiments
Figure 76. Detecting neck as part of the face
Figure 77. Identical twins
Chapter 1 - Introduction

1.1 Face Recognition Overview

A new opportunity for the application of statistical methods is driven by the growing interest in biometric performance evaluation. Methods for performance evaluation seek to identify, compare, and interpret how characteristics of subjects, the environment, and images are associated with the performance of recognition algorithms.

Biometrics is an emerging area of bioengineering concerned with the automated recognition of a person based on a physiological or behavioral characteristic. Several biometric systems exist, based on the signature, fingerprints, voice, iris, retina, hand geometry, ear geometry, and face. Among these, facial recognition appears to be one of the most universal, collectable, and accessible.

The field of biometric face recognition blends methods from computer science, engineering, and statistics; however, statistical reasoning has been applied predominantly in the design of recognition algorithms.

Biometric face recognition, otherwise known as Automatic Face Recognition (AFR), is a particularly attractive biometric approach, since it focuses on the same identifier that humans primarily use to distinguish one person from another: the face. One of its main goals is the understanding of the complex human visual system and of how humans represent faces in order to discriminate between different identities with high accuracy.

Face recognition is concerned with identifying individuals from a collection of face images. It belongs to a vast range of biometric approaches that also includes fingerprint, iris/retina, and voice recognition. Overall, biometric approaches are concerned with identifying individuals by their unique physical characteristics.

Traditionally, passwords and Personal Identification Numbers have been employed to formally identify individuals, but the disadvantages of such methods are that someone else may use them and that they can easily be forgotten. Given these problems, biometric approaches such as face, fingerprint, iris/retina, and voice recognition provide a far superior solution for identifying individuals: not only do they uniquely identify an individual, they also minimize the risk of someone else using another person's identity.

However, a disadvantage of fingerprint, iris/retina, and voice recognition is that they require active cooperation from individuals. For example, fingerprint recognition requires participants to press their fingers onto a fingerprint reading device, iris/retina recognition requires participants to stand in front of an iris/retina scanning device, and voice recognition requires participants to speak into a microphone.

Face recognition is therefore considered a better approach than other biometrics because it is versatile: individuals can be identified actively, by standing in front of a face scanner, or passively, as they walk past one.

There are also disadvantages to using face recognition. Faces are highly dynamic and can vary considerably in orientation, lighting, scale, and facial expression, so face recognition is considered a difficult problem to solve.

Given these problems, many researchers from a range of disciplines, including pattern recognition, computer vision, and artificial intelligence, have proposed solutions to minimize such difficulties and to improve the robustness and accuracy of face recognition approaches. Among those issues, the following are prominent for most systems: the illumination problem, the pose problem, scale variability, images taken years apart, glasses, moustaches, beards, low-quality image acquisition, and partially occluded faces.

Figure 1. Face Recognition Challenges

The illumination problem is shown in the next figure, where the same face appears differently due to changes in lighting. More specifically, the changes induced by illumination can be larger than the differences between individuals, causing systems based on comparing images to misclassify the identity of the input image.
Figure 2. Face Illumination Problem

The pose problem is shown in the next figure, where the same face appears differently due to changes in viewing conditions. The pose problem has been divided into three categories:

1. The simple case, with small rotation angles.
2. The most commonly addressed case, when there is a set of training image pairs (frontal and rotated images).
3. The most difficult case, when training image pairs are not available and illumination variations are present.

Figure 3. Pose Problem
Other challenges in face recognition include: scale variability, where face images are taken at different scales, affecting the results; moustaches and beards on the applied person; low-quality image acquisition that produces colors different from the original; the need to detect faces in both color and grayscale images; and, a major problem, partially occluded faces, where part of the face is hidden by glasses, a hat, or other objects.

Face recognition has far-reaching benefits to corporations, the government, and the greater society. Applications of face recognition in corporations include access to computers, secure networks, and video conferencing; access to office buildings and restricted sections of those buildings; access to storage archives; and identifying members at conferences and annual general meetings.

Specific corporate applications include access and authorization to operate machinery; clocking on and off at the beginning and end of work; assignment of work responsibilities and accountability based on identity; monitoring employees; and confirming the identity of clients, suppliers, and transport and logistics companies when they send and receive packages. Additionally, sales, marketing, and advertising companies could identify their customers in conjunction with customer relationship management software.

Applications of face recognition in state and federal government may include access to parliamentary buildings and press conferences, and access to secure, confidential government documents, reports, and doctrines. Specific government uses can include Australian Customs verifying the identity of individuals against their passport files and documents, or state and federal police using face recognition to improve crime prevention and facilitate police activities.

Applications of face recognition in the greater society may include election voting registration; access to venues and functions; verifying the identity of drivers against their issued licenses and personal identification cards; confirming identity for point-of-sale transactions such as credit card payments; and confirming identity when accessing funds from an automatic teller machine. Other applications include facilitating home security and gaining access to motor vehicles.

1.2 Face Recognition Applications

There are a large number of applications of face recognition:

Easy people tagging
Facebook's automatic tag suggestion feature, which uses face recognition to suggest people that users might want to tag in different photos, got people hot under the collar earlier this year. Face recognition for people tagging certainly saves time. It is currently available in Apple's iPhoto, Google's Picasa, and on Facebook.

Gaming
Image and face recognition is bringing a whole new dimension to gaming. Microsoft Kinect's advanced motion-sensing capabilities have given the Xbox 360 a whole new lease of life and opened up gaming to new audiences by completely doing away with hardware controllers.

Security
Face recognition could one day replace password logins in our favorite apps; imagine logging in to Twitter with your face.

Marketing
Face recognition is gaining the interest of marketers. A webcam can be integrated into a television to detect any face that passes by. The system then estimates the race, gender, and age range of the face. Once this information is collected, a series of advertisements specific to the detected race, gender, and age can be played.

Due to the high importance of face recognition as a research field, the error rate has decreased sharply, as shown in the following figure:

Figure 4. Error rate of face recognition from 1993 to 2006

1.3 Face Recognition Stages

The face recognition problem can be divided into two main stages:

1. Face verification (or authentication).
2. Face identification (or recognition).

Using a simple camera, an image is captured; the detection stage identifies and locates a face in that image. The recognition stage is the second stage; it includes feature extraction, where information important for discrimination is saved, and matching, where the recognition result is produced with the aid of a face database.

Figure 5. Captured face image is compared with a set of images

Identification and authentication are two terms that describe the major phases of a face recognition system. The terms are often used synonymously, but authentication is typically a more involved process than identification. Identification is what happens when one professes to have a certain identity in the system, while authentication is what happens when the system determines that you are who you claim to be. Both processes are usually used in tandem, with identification taking place before authorization, but they can stand alone, depending on the nuances of the system.
1.3.1 Face Identification

Identification is the process of presenting an identity to a system. It is done in the initial stages of gaining access to the system and is what happens when one claims to be a particular system user. The claim can take the form of providing your username during the login process, placing your finger on a scanner, giving your name on a guest list, or any other format in which you claim an identity with the aim of gaining access.

As shown in the next figure, before users try to enter a system or a building, a camera captures their face images; the face is then identified using a set of test images.

Figure 6. Face identification System

Identification is not necessary for some systems, such as ATM cards, where anyone with the correct code can gain access to an account without identifying themselves.

Authentication is the process of validating an identity provided to a system. This entails checking the validity of the identity prior to the authorization phase.

1.3.2 Face Authentication

The process of checking the validity of the evidence provided to support the claimed identity must be sufficiently robust to detect impostors. Authentication usually occurs after identification is complete, such as when you supply a password to support a username during the login process. It can, however, happen at the same time as the identification process.

As shown in the next figure, the captured images are applied to the system, and after some processing the identified person's information, such as name and age, is presented to the viewer.
Figure 7. Presenting Identity to users

Identification and authentication are not easily distinguished, especially when both occur in one transaction. They may appear synonymous, but they are two different processes.

The primary difference between them is that identification relates to the provision of an identity, while authentication relates to the checks made to ensure the validity of a claimed identity. Simply put, the identification process involves making a claim to an identity, whereas the authentication process involves proving that identity.

Identification occurs when you type your username into a login screen, because you have claimed to be that person, while authentication occurs after you have typed in a password and hit the "login" button, at which time the validity of your claim to the username is determined.

Some common authentication methods include smartcards, biometrics, RSA tokens, and passwords, while common identification methods are usernames and smartcards.

1.4 Face Recognition Major Classes

Recognition systems have been divided into two major classes:

1. Holistic methods.
2. Local feature-based methods.

1.4.1 Holistic Face Recognition

In holistic approaches, the whole face image is used as the raw input to the recognition system. An example is the well-known PCA-based technique introduced by Kirby and Sirovich, followed by Turk and Pentland. Holistic processing is generally accepted to be unique to faces and provides strong support for the notion that faces are processed differently from all other object categories.

Features found by holistic approaches represent the optimal variances of pixel data in face images, which are used to uniquely distinguish one individual from another. Holistic face recognition utilizes global information from faces: this global information is fundamentally represented by a small number of features that are derived directly from the pixel information of the face images. This small number of features distinctly captures the variance among different individual faces and is therefore used to uniquely identify individuals. The next figure shows an example of such a method, where the average face is used as a global feature.

Figure 8. Normal face, average face from the AR Face database, and normalized face, which is the difference between the normal face and the average face

For a database of M faces x_1, ..., x_M, the average face is calculated by this equation:

x_avg = (1/M) Σ_{i=1}^{M} x_i

Once the average face is found, the normalized face is calculated by subtracting the average face of the whole dataset from each individual face:

φ_i = x_i - x_avg
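As a concrete illustration, the following MATLAB sketch computes the average face and the normalized faces for a set of vectorized grayscale face images; the variable names (faces, avgFace, normFaces) are illustrative and not taken from the thesis code.

% faces: D-by-M matrix, one vectorized grayscale face image per column
M = size(faces, 2);                        % number of faces in the database
avgFace = sum(faces, 2) / M;               % average face (mean over the M faces)
normFaces = faces - repmat(avgFace, 1, M); % normalized faces: subtract the average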
1.4.2 Local Features Face Recognition

In local feature-based approaches, local features such as the eyes, nose, and mouth are extracted, and their locations and local statistics (appearance) are the input to the recognition stage. An example of this class of methods is Elastic Bunch Graph Matching (EBGM).

Feature-based face recognition uses a priori information, or local features of faces, to select a number of features that uniquely identify individuals. Local features include the eyes, nose, mouth, chin, and head outline, which are selected from face images and used to uniquely identify individuals. One local-feature technique is graph matching, shown in the next figure.

Figure 9. Graph Matching

Elastic Bunch Graph Matching recognizes faces by matching the probe set, represented as input face graphs, to the gallery set, represented as a model face graph. Fundamental to Elastic Bunch Graph Matching is the concept of nodes. Each node of the input face graph is represented by a specific feature point of the face: for example, one node represents an eye, another node represents the nose, and so on for the other face features.

The model face graph represents the gallery set; only one model face graph is used to represent the entire gallery set. The model face graph can be conceptually thought of as a number of input face graphs stacked on top of each other and concatenated into a single graph, with the exception that this is applied to the gallery set instead of the probe set. This allows the grouping of the same types of face features from different individuals. For example, the eyes of different individuals can be grouped together to form the eye feature point of the model face graph, and the noses of different individuals can be grouped together to form the nose feature point.

Given these definitions of the input face graph and the model face graph, determining the identity of the input face graph amounts to finding the gallery face that achieves the smallest distance to the input face graph with respect to the model face graph.
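EBGM itself compares Gabor-jet responses at each node; as a simplified conceptual sketch only, the matching step can be viewed as a nearest-neighbour search over stacked node-feature vectors. The MATLAB snippet below is an illustrative approximation of that final distance comparison, not the actual EBGM algorithm, and all variable names are hypothetical.

% probeGraph: D-by-1 stacked node-feature vector of the input face graph
% galleryGraphs: D-by-G matrix, one stacked node-feature vector per gallery face
G = size(galleryGraphs, 2);
dists = zeros(1, G);
for g = 1:G
    dists(g) = norm(probeGraph - galleryGraphs(:, g)); % distance between graphs
end
[~, identity] = min(dists);  % gallery face with the smallest distance wins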
Chapter 2 - Background / Related Work

The identification of faces has been a working field for researchers in previous years because it can facilitate the recognition process, and the problem has been addressed by several methods. A brief review of some solutions is given here so that readers have a good background on this topic.

2.1 YCbCr Color Space Overview

The YCbCr color space is widely used for digital video. In this format, luminance information is stored as a single component (Y), and chrominance information is stored as two color-difference components (Cb and Cr). Cb represents the difference between the blue component and a reference value; Cr represents the difference between the red component and a reference value.

YCbCr is sometimes abbreviated to YCC. Y′CbCr is often called YPbPr when used for analog component video, although the term Y′CbCr is commonly used for both systems, with or without the prime. Y′CbCr is not an absolute color space; rather, it is a way of encoding RGB information. The actual color displayed depends on the actual RGB primaries used to display the signal. Therefore, a value expressed as Y′CbCr is predictable only if standard RGB primary chromaticities are used.

Figure 10. Converting RGB to YCbCr

The individual color components of the YCbCr color space are the luma Y, the chroma Cb, and the chroma Cr. Chroma Cb corresponds to the U color component, and chroma Cr corresponds to the V component, of a general YUV color space. The next figure shows a visualization of the YCbCr color space.

Figure 11. Visualization of YCbCr in terms of its components
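In MATLAB, this conversion is available in the Image Processing Toolbox; a minimal sketch, with 'face.jpg' as a placeholder file name:

rgbImg = imread('face.jpg');   % placeholder input image
ycc = rgb2ycbcr(rgbImg);       % ITU-R BT.601 conversion used by MATLAB
Y  = ycc(:, :, 1);             % luminance
Cb = ycc(:, :, 2);             % blue-difference chroma
Cr = ycc(:, :, 3);             % red-difference chroma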
2.2 Face Components Extraction Based On YCbCr [1]

Principally, this research is conducted in the automated steps illustrated in the next figure. The first step is face detection based on a skin color model. The result is then cropped to normalize the face region. Next, the eye, nose, and mouth components are extracted, and the distances between them are measured.

Figure 12. Face Components Extraction Steps

2.2.1 Face Skin Model Detection

Ninety skin samples of Indonesian faces are used. The extraction step is conducted by decreasing the luminance level to reduce lighting effects so that the underlying image is obtained. Decreasing the luminance level is done by converting the image from RGB to YCbCr, or chromatic color. After the Cb and Cr values are obtained, a low-pass filter is applied to the image in order to reduce noise. A reshape function is then applied to the Cb and Cr values, turning them into row vectors.

The face detection process begins with the skin model detection process, applying a threshold value to obtain a binary image, as shown in the following figure:

Figure 13. Face Detection Stages

2.2.2 Face Cropping Process on a Normal Static Image

The binary image obtained from the threshold process is further processed to take and crop the face part of the image. The face image is the part in white (pixel value = 1). The processing consists of the following steps:

1. Separating the skin parts belonging to the face from those of non-face parts, such as arms, hands, and shoulders.

2. Determining the hole area of the picture, which indicates the face region. The face region is detected using the following equation:

E = C - H

where E is the Euler number, C is the number of connected components, and H is the number of holes in the region. Using this equation, the region that contains holes (such as the eyes and mouth) is identified as the face region.

3. Finding the statistics of color values between the hole area of the picture (which indicates the face area) and the face template picture, after the hole that represents the face region has been determined. The center of mass is used to determine the position of the face part of the picture; for a region of A pixels with coordinates (x_i, y_i), it is given by:

x_c = (1/A) Σ x_i,  y_c = (1/A) Σ y_i
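A minimal MATLAB sketch of this pipeline follows. The paper derives its skin model from the Indonesian face samples; the Cb/Cr thresholds below are commonly cited example values, not the paper's own, and 'face.jpg' is a placeholder.

ycc = rgb2ycbcr(imread('face.jpg'));                  % placeholder input image
Cb  = double(ycc(:, :, 2));
Cr  = double(ycc(:, :, 3));
skin = Cb >= 77 & Cb <= 127 & Cr >= 133 & Cr <= 173;  % assumed example thresholds
skin = medfilt2(skin, [5 5]);                         % stands in for the low-pass noise filter
stats = regionprops(skin, 'EulerNumber', 'Centroid'); % Euler number and centre of mass per region
faceIdx = find([stats.EulerNumber] < 1);              % regions with holes are face candidates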
Figure 14. Face Region Detected

2.2.3 Extraction Process and Measurement of Distances between Face Components

The face region image resulting from the face detection process is further processed to obtain the face components and the distances between them. This is done by extracting the eye, nose, and mouth components. The extraction determines the components' locations and is performed in the YCbCr color space to separate the luminance and chrominance components in order to reduce lighting effects. The distances measured include, among others, the nose height and the nose width.

The face extraction process in this research is conducted in three stages:

1. Face division.
2. Face component detection and extraction.
3. Measurement/calculation of distances between face components.

The face image from which the components will be extracted is first processed by dividing it into regions, in order to narrow down the area for detection. The extraction result can then be expected to be more accurate. The division also minimizes the probability of other components being detected. Detection is conducted by computing the color-space components in the regions assumed to be the locations of the face components; these are extracted to determine the locations of the components. The face component extraction process is conducted next.

2.2.4 Face Division

The face is divided into three parts: the face, eye, and mouth regions. The face image to be divided must contain at least the forehead and chin regions, and at most the neck region. Some improvements are made to the mouth region to get a better result than the previous research, which divided the mouth region as illustrated in the next figure.
An approximate position of the mouth is determined as the center of the region, vertically and horizontally. A neck part may affect the mouth component's position within the mouth region, since the mouth component is not always located vertically at the center of the region, as illustrated in the next figure.

Figure 15. Face Region Division Model

2.2.5 Face Component Detection and Extraction

After the face is divided into regions, its components are extracted:

1. Eye extraction is done by forming an eye map, as shown in the following figure.

Figure 16. Eye Map Formulation

2. Mouth extraction is done by forming a mouth map, as shown in the following figure:

Figure 17. Mouth Map Formulation

Based on the detected mouth and eye locations, the mouth region is detected.

After the whole extraction process is completed, each face component is surrounded by a bounding box, and the distances between the components of the face are calculated, as shown in the following figure. A face distance is obtained by calculating the difference between two points' coordinates if a perfectly vertical or horizontal line connects those points. Otherwise, the Pythagorean theorem is used, since additional lines can be drawn from the coordinates to form a right triangle; the face component distance is then the hypotenuse of the triangle. The value is rounded to the nearest integer.

Figure 18. Face Components After Face Division Process
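The distance computation itself reduces to the Euclidean distance between component centres; a short MATLAB sketch, where the two centres are hypothetical example values:

leftEye = [120 95];                   % hypothetical [row column] centre of the left eye
mouth   = [150 160];                  % hypothetical centre of the mouth
d = double(leftEye - mouth);
dist = round(sqrt(d(1)^2 + d(2)^2));  % hypotenuse of the right triangle, rounded
% When the two centres share a row or a column, this reduces to a plain
% horizontal or vertical difference, as described above.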
2.3 Face Recognition Based On Eigenfaces [2]

The eigenface method is a well-known template-matching approach. Face recognition using the eigenface method is performed using feature values that are projected by one eigenface set obtained from principal component analysis (PCA). In addition, the eigenface method can be used for face coding (face reconstruction), the technique of extracting a small face code from which face images can be reconstructed. The eigenface method uses PCA, which has the property of optimal reconstruction.

However, a single eigenface set is not enough to represent complicated face images with large variations of pose and/or illumination, and PCA is often not effective for analyzing a nonlinear structure such as face images, because PCA is inherently a linear method. To overcome this weakness, a mixture-of-eigenfaces method is used, employing a mixture of multiple eigenface sets obtained from a PCA mixture model for an effective representation of face images. The method is motivated by the idea behind the PCA mixture model: classification performance can be improved by modeling each class as a mixture of several components and by performing the classification in a compact and decorrelated feature space.

2.3.1 The eigenface and the second-order eigenface method

PCA is a well-known technique for multivariate linear data analysis. The central idea of PCA is to reduce the dimensionality of a data set while retaining as much of the variation in the data set as possible. In PCA, a set of N-dimensional observation vectors X is reduced to a set of N1-dimensional feature vectors Y by a transformation matrix U, that is, Y = U^T X.
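A compact MATLAB sketch of this projection, computing the eigenfaces from mean-subtracted training faces via the covariance matrix (variable names and the value of N1 are illustrative):

% normFaces: D-by-M matrix of mean-subtracted training faces (one per column)
M  = size(normFaces, 2);
N1 = 20;                                % number of eigenfaces to keep (example value)
C = (normFaces * normFaces') / M;       % D-by-D covariance matrix
[U, S] = eig(C);                        % eigenvectors (eigenfaces) and eigenvalues
[~, order] = sort(diag(S), 'descend');  % order components by decreasing variance
U = U(:, order(1:N1));                  % keep the N1 leading eigenfaces
Y = U' * normFaces;                     % N1-dimensional feature vectors

In practice, when the image dimension D is large, the smaller M-by-M Gram matrix normFaces' * normFaces is usually diagonalized instead, and its eigenvectors are mapped back to the image space.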
A face image can be effectively reconstructed from the eigenfaces. However, in some conditions the set of eigenfaces does not represent the faces well. For example, under varying lighting conditions, the initial principal components in the set of eigenfaces mainly reflect the lighting factors in the face image. In this situation, using only one set of eigenfaces is not effective for representing the face images. To overcome this limitation, the second-order eigenface method uses not only the set of eigenfaces of the original face images but also the set of second-order eigenfaces of the residual face images, which are defined as the differences between the original face images and the reconstructed images obtained from the first set of eigenfaces.

Figure 19. A procedure of processing face images in the second-order eigenface method

2.3.2 PCA mixture model and EM learning

Both the eigenface method and the second-order eigenface method use only one set of eigenfaces. However, one set of eigenfaces is often not enough to represent face images with large variations of pose and/or illumination. A second-order mixture-of-eigenfaces method is therefore used, combining the second-order eigenface method and the mixture-of-eigenfaces method; it provides a couple of mixtures of multiple eigenface sets.

The PCA mixture model is used to estimate a density function. The central idea comes from the combination of mixture models and PCA. In a mixture model, a class is partitioned into K clusters, and the density function of an N-dimensional observation vector x is represented by a linear combination of the component densities of the K partitioned clusters:

p(x) = Σ_{k=1}^{K} P(k) p(x | k)

where P(k) is the mixing weight of cluster k and p(x | k) is the component density of that cluster.

The following figure illustrates a PCA mixture model where the number of mixture components is K = 2, the dimension of the feature vectors is N = 2, the line segments in each cluster represent the two column vectors u1 and u2, and the intersection of the two line segments represents a mean vector m.

To use the PCA mixture model for estimating the data distribution, both the appropriate partitioning of the class and the estimation of the model parameters of the partitioned clusters must be performed. This task can be performed successfully thanks to an important property of mixture models: for many choices of component density function, they can approximate any continuous density to arbitrary accuracy, provided the model has a sufficiently large number of components and the parameters of the model are chosen appropriately.
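For illustration only, the mixture density above can be evaluated in MATLAB (Statistics and Machine Learning Toolbox) as a weighted sum of per-cluster Gaussian component densities; the weights, means, and covariances here are placeholder values, not learned parameters.

x  = [0.5 1.2];                 % one 2-D observation (N = 2)
Pk = [0.5 0.5];                 % mixing weights P(k) for K = 2 clusters
mu = [0 0; 3 3];                % cluster means, one row per cluster
p  = 0;
for k = 1:2
    p = p + Pk(k) * mvnpdf(x, mu(k, :), eye(2));  % weighted component density p(x|k)
end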
Figure 20. An illustration of the PCA mixture model

2.3.3 The second-order mixture-of-eigenfaces method

The class is divided into K partitioned clusters through EM learning, as shown in the next figure, over the whole set of face images, where each cluster is represented by its own independent component parameters. To represent the face images accurately, the second-order eigenface method can then be applied to each cluster independently. This is called the second-order mixture-of-eigenfaces method, in that the face images are represented by a mixture of several components, and each partitioned component is represented by a couple of eigenface sets: an approximate set and a residual set.

Figure 21. An iterative EM learning algorithm
The next figure shows examples of several face images reconstructed with the proposed second-order mixture-of-eigenfaces method, where the face images in rows (a)-(d) correspond to the original images, the second-order reconstructed images, the first-order approximate images, and the residual images, respectively.

Figure 22. Examples of face image reconstructions

2.4 Skin Detection

Skin detection in images is a theme present in many applications; it is, for example, the first step in face recognition. Another application is nudity detection on the Internet. This work presents a system for automatic skin detection.

Skin color detection is frequently used for searching for people, face detection, pornographic filtering, and hand tracking. The presence of skin or non-skin in a digital image can be determined by manipulating the pixels' color and/or texture. The main problem in skin color detection is to represent a skin color distribution model that is invariant, or least sensitive, to changes in illumination conditions. Another problem comes from the fact that many objects in the real world may have an almost skin-tone color, such as wood, leather, skin-colored clothing, hair, and sand. Moreover, skin color differs between races and can differ from one person to another, even among people of the same ethnicity. Finally, skin color appears slightly different when different types of camera are used to capture the object or scene.

Skin color is produced by a combination of melanin, haemoglobin, carotene, and bilirubin. Haemoglobin gives blood a reddish or bluish color, while carotene and bilirubin give skin a yellowish appearance. The amount of melanin makes skin appear darker.

Due to its vast range of applications, skin color detection research is becoming increasingly popular in the computer vision research community. Today, skin color detection is often used as a preprocessing step in applications such as face detection, pornographic image detection, hand gesture analysis, people detection, and content-based information retrieval.

Skin color fills only a small fraction of the whole color model, and thus any frequent appearance of it in an image can be a clue to human presence.
A skin color classifier defines a decision boundary for the skin color pixels in the selected color model, based on a database of skin-colored pixels. This classifier can be created using different techniques, such as k-means, Bayesian methods, maximum entropy, neural networks, and others.

Figure 23. Binary classifier to segment color image pixels into skin and non-skin

Skin color provides computationally effective information that is robust against rotation, scaling, and partial occlusion. Skin color can also be used as complementary information to other features such as shape, texture, and geometry.

Although detecting skin-colored pixels seems a straightforward and easy task, it has proven to be quite challenging, for several reasons. The appearance of skin color in an image depends on the illumination conditions under which the image was captured; therefore, a major challenge in skin color detection is to represent a skin color distribution model that is invariant, or least sensitive, to changes in illumination conditions. In addition, the choice of color model used for skin color modeling can significantly affect the performance of any skin color distribution method. Another challenge comes from the fact that many objects in the real world may have an almost skin-tone color, such as wood, leather, skin-colored clothing, hair, sand, and so on. Moreover, skin color differs between human races and can differ from one person to another, even among people of the same ethnicity. Finally, skin color appears slightly different when different types of camera are used to capture the object or scene.

The main problem of skin color detection is to develop a skin color detection algorithm or classifier that is robust to large variations in color appearance. Some objects may have an almost skin-tone color that is easily confused with skin. A skin color can vary in appearance based on changes in background color, illumination, and the location of light sources, and other objects within the scene may cast shadows or reflect additional light.

No specific methods or techniques have been proposed that make skin color detection robust under varying lighting conditions, especially when the illumination color changes. This condition may occur in both outdoor and indoor environments with a mixture of daylight and artificial light.
Many non-skin-colored objects overlap with skin color, and most pixel-based methods proposed in the literature cannot solve this problem. The problem is difficult to solve because skin-like materials are objects that appear skin-colored under a certain illumination condition.

2.4.1 HSV Color Space Overview

Hue, Saturation, Value, or HSV, is a color model that describes colors in terms of their shade (saturation, or amount of gray) and their brightness (value, or luminance). The HSV color wheel may be depicted as a cone or a cylinder, as shown in the following figure:

Figure 24. HSV Color Model

The hue (H) of a color refers to which pure color it resembles: all tints, tones, and shades of red have the same hue. Hues are described by a number that specifies the position of the corresponding pure color on the color wheel, as a fraction between 0 and 1. The value 0 refers to red, 1/6 is yellow, 1/3 is green, and so forth around the color wheel.

The saturation (S) of a color describes how white the color is. A pure red is fully saturated, with a saturation of 1; tints of red have saturations less than 1; and white has a saturation of 0.

The value (V) of a color, also called its lightness, describes how dark the color is. A value of 0 is black, with increasing lightness moving away from black.

The outer edge of the top of the cone is the color wheel, containing all the pure colors. The H parameter describes the angle around the wheel. The saturation S is zero for any color on the axis of the cone; the center of the top circle is white. An increase in the value of S corresponds to a movement away from the axis. The value V is zero for black; an increase in V corresponds to a movement away from black and toward the top of the cone.

The HSV color space is quite similar to the way in which humans perceive color. HSV separates the luma, or image intensity, from the chroma, or color information, which is very useful in many applications.

2.4.2 Skin Detection using HSV [3]

First, the image is converted from RGB to the HSV color space, because HSV is more closely related to human color perception.
Skin is characterized by values between 0 and 50 in the H channel and from 0.23 to 0.68 in the S channel. However, the component actually used to segment skin pixels is the H channel, with values ranging between 6 and 38, together with a mix of morphological and smoothing filters.

Figure 25. Skin detection using HSV

The resulting image contains much noise in the classification of pixels as skin and non-skin. The next step minimizes this noise, using a 5x5 structuring element in morphological filters. The structuring element is first used with a dilation filter, which expands the areas in the skin regions. After that, the same structuring element is used to erode the image and remove the imperfections that the dilation created. These techniques are used, by approximation, to fill all the spaces that the H-channel range had classified as skin or non-skin. Then, a 3x3 median filter is used to further soften the results of the dilation and erosion, because these operations distort regions along the contours.

Figure 26. Skin detection scheme using HSV

Finally, only skin regions are represented as white pixels. This result is shown in the following figure.

Figure 27. Skin after morphological operations and filtering
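A minimal MATLAB sketch of this scheme follows. It assumes the H thresholds above are expressed in degrees (rgb2hsv returns H in [0, 1], so the range 6 to 38 is scaled accordingly), and 'face.jpg' is a placeholder file name.

hsv = rgb2hsv(imread('face.jpg'));       % placeholder input image
H = hsv(:, :, 1) * 360;                  % hue scaled to degrees
skin = H >= 6 & H <= 38;                 % H-channel skin rule from the text
se = strel('square', 5);                 % 5x5 structuring element
skin = imerode(imdilate(skin, se), se);  % dilation followed by erosion
skin = medfilt2(skin, [3 3]);            % 3x3 median filter to smooth the contours
imshow(skin)                             % skin regions shown as white pixels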
2.4.3 RGB Color Space Overview

There are several ways to specify colors; the most common is the RGB color model. The RGB model defines a color by giving the intensity level of the red, green, and blue light that mix together to create a pixel on the display. With most of today's displays, the intensity of each color can vary from 0 to 255, which gives 16,777,216 different colors.

RGB is the most commonly used color space in digital images. The RGB color model is an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors: red, green, and blue. It encodes colors as an additive combination of the three primary colors: red (R), green (G), and blue (B). The RGB color space is often visualized as a 3D cube in which R, G, and B are the three perpendicular axes.

One main advantage of the RGB space is its simplicity. However, it is not perceptually uniform, which means that distances in RGB space do not correspond linearly to human perception. In addition, the RGB color space does not separate luminance and chrominance, and the R, G, and B components are highly correlated. The luminance of a given RGB pixel is a linear combination of the R, G, and B values; therefore, changing the luminance of a given skin patch affects all of the R, G, and B components. In other words, the location of a given skin patch in the RGB color cube changes with the intensity of the illumination under which the patch was imaged. This results in a very stretched skin color cluster in the RGB color cube. RGB is nevertheless used extensively in the skin detection literature because of its simplicity.

The main purpose of the RGB color model is the sensing, representation, and display of images in electronic systems such as televisions and computers, though it has also been used in conventional photography. The RGB color model already had a solid theory behind it, based on the human perception of colors.

RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements and their response to the individual R, G, and B levels vary from manufacturer to manufacturer, and even in the same device over time.

To form a color with RGB, three light beams (one red, one green, and one blue) must be combined. Each of the three beams is called a component of that color, and each of them can have an arbitrary intensity, from fully off to fully on, in the mixture. The RGB color model is additive in the sense that the three light beams are added together, and their light spectra add, wavelength for wavelength, to make the final color's spectrum. Zero intensity for each component gives the darkest color (no light, considered black), and full intensity of each gives white.
  • 34. 34 Figure 28. RGB Color Model 2.4.5 Skin Detection using RGB [4][5] To detect skin color using RGB, three main steps are followed: 1. Data preparation. 2. Skin color classifier modeling. 3. Testing and evaluation. Data preparation This step involves collecting a large number of human skin images from different databases such as the Compaq dataset, the Sigal dataset, the Testing Dataset for Skin Detection (TDSD), and the db-skin dataset. Image Segmentation Image segmentation is the process of dividing an image into multiple parts; it is typically used to identify objects or other relevant information in digital images. An accurate skin segmentation is considered important in order to have images with exact ground truth information and to get optimum results in the skin detection experiments. Each of the test images was segmented manually using Adobe Photoshop. The regions of skin pixels were selected using the Magic Wand tool, which is available in Adobe Photoshop. This tool enables the user to select a consistently colored area without having to trace its outline, and allows the user to interactively segment regions of skin by clicking the desired area. If a contiguous area is selected, all adjacent pixels within the tolerance range of the color region are selected. The tolerance range defines how similar in color a pixel must be to be included in the selection; its value can be adjusted according to the skin image, so that regions of skin with complex shapes can be segmented quickly. If the skin and non-skin regions are too difficult to separate because their pixel colors are very similar, manual segmentation using a pen-tracing tool is employed instead: the user traces the skin and non-skin areas by hand. The following figure illustrates the skin and non-skin annotation used to obtain ground truth skin and non-skin information.
  • 35. 35 Figure 29. An annotation process for skin and non-skin ground truth information This process has to be done carefully to exclude the eyes, hair, mouth opening, eyebrows, moustache and any other material covering the skin area. The RGB values of skin and non-skin areas were mapped to [255 255 255] and [0 0 0], respectively. Data Transformation Before the skin and non-skin pixels were used in experiments, the pixels of each skin and non-skin portion were transformed into a 2-dimensional matrix. Figure 30. Transformation of RGB from a 3D into a 2D matrix Skin Color Modeling Skin color distribution modeling is the third step, carried out after the color model has been chosen and the data transformed. A new technique called the RGB ratio model has been introduced. The RGB ratio model is one of the explicitly defined skin-region methods; its rules are formulated by examining histograms and scatter plots.
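The transformation in Figure 30 amounts to flattening an H x W x 3 image into an N x 3 matrix with one (R, G, B) row per pixel. A minimal sketch, assuming NumPy and using a random image as a stand-in for a segmented skin portion:

```python
import numpy as np

# Stand-in for a segmented skin/non-skin portion of an RGB image.
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Flatten the 3D (height x width x 3) array into a 2D (pixels x 3) matrix,
# one (R, G, B) row per pixel, as in Figure 30.
pixels = image.reshape(-1, 3)
print(pixels.shape)  # (4096, 3)
```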
  • 36. 36 A pixel is a skin color pixel if the four rules in the following figure hold: Figure 31. RGB skin color rules These rules can be interpreted as follows: the range of R values is 96 to 255, the range of G values is 41 to 239, and the range of B values is 21 to 254. The histograms of the ratio of the difference between R and G over the sum of R and G, and of the ratio of B over the sum of R and G, are plotted from the skin pixels of the training dataset, as shown in the following figure. Figure 32. Histograms of (R-G)/(R+G) and B/(R+G), respectively A new rule for skin color has been developed based on these histograms: Figure 33. RGB skin color rules based on the color histogram
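A per-pixel sketch of such a rule-based classifier is shown below. The channel ranges follow the text's reading of Figure 31; the ratio thresholds are hypothetical placeholders, since the actual cut-offs appear only in Figure 33 and cannot be recovered from the text alone.

```python
# Sketch of an RGB-ratio skin classifier.
RG_LOW, RG_HIGH = 0.0, 0.5   # assumed bounds on (R - G) / (R + G)
B_HIGH = 0.5                 # assumed bound on B / (R + G)

def is_skin(r, g, b):
    """Return True if an (R, G, B) pixel satisfies the rules."""
    in_range = 96 <= r <= 255 and 41 <= g <= 239 and 21 <= b <= 254
    denom = r + g
    if not in_range or denom == 0:
        return False
    rg_ratio = (r - g) / denom
    b_ratio = b / denom
    return RG_LOW <= rg_ratio <= RG_HIGH and b_ratio <= B_HIGH

print(is_skin(180, 120, 90))  # example pixel
```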
  • 37. 37 Testing and Evaluation The performance of a skin color detection algorithm can be measured by two methods: 1. Quantitative techniques 2. Qualitative techniques The quantitative method consists of two techniques, i.e. Receiver Operating Characteristics (ROC) and the true and false positive rates. The qualitative technique is based on observing the ability of the skin color classifier to distinguish skin and non-skin pixels in images. The true positive (TP) and false positive (FP) rates are statistical measures of the performance of a binary classification test. Binary classification is the task of classifying the members of a given set of objects into two groups on the basis of whether or not they have some property. The TP rate, also called sensitivity, measures the proportion of actual positives which are correctly identified as such, while the FP rate measures the proportion of actual negatives which are incorrectly identified as positive. The FP rate is equal to the significance level, and the specificity of the test is equal to one minus the FP rate (1 - FP). In the case of skin color detection, the performance of the algorithm can be expressed by the equations summarized in the following figure. Figure 34. Examples of skin color classification
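The equation referred to above is not reproduced in the text; the standard definitions, which it presumably matched, are:

TPR = TP / (TP + FN),    FPR = FP / (FP + TN)

where FN and TN count the false and true negatives. The specificity mentioned above is then TN / (FP + TN) = 1 - FPR.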
  • 38. 38 Chapter 3 - System Analysis 3.1 System Analysis Overview Systems are created to solve problems, and the systems approach can be thought of as an organized way of dealing with a problem. In this dynamic world, the subject of System Analysis and Design (SAD) mainly deals with software development activities. The following sections define a system, explain the different phases of the system development life cycle, enumerate the components of system analysis and explain the components of system design. 3.1.1 What is a System A collection of components that work together to realize some objective forms a system. Basically, there are three major components in every system: input, processing and output. Figure 35. Basic System Components In a system, the different components are connected with each other and are interdependent. For example, the human body represents a complete natural system, and many man-made systems, such as a nation's political, economic and educational systems, follow the same pattern. The objective of the system demands that some output is produced as a result of processing suitable inputs. A well-designed system also includes an additional element referred to as 'control', which provides feedback to achieve the desired objectives of the system. 3.1.2 System Life Cycle The system life cycle is an organizational process for developing and maintaining systems. It helps in establishing a system project plan, because it gives an overall list of the processes and sub-processes required for developing a system. The system development life cycle is the combination of these various activities. In System Analysis and Design terminology, the system development life cycle also means the software development life cycle. The different phases of the system development life cycle are:  Preliminary study  Feasibility study  Detailed system study  System analysis  System design  Coding  Testing  Implementation  Maintenance The next figure shows the different phases in the system development life cycle:
  • 39. 39 Figure 36. Phases of System Development Life Cycle 3.2 Phases of system development life cycle Following is a description of the system development life cycle. 3.2.1 Preliminary System Study The preliminary system study is the first stage of the system development life cycle. It is a brief investigation of the system under consideration that gives a clear picture of what the physical system actually is. In practice, the initial system study involves the preparation of a 'system proposal' which lists the problem definition, the objectives of the study, the terms of reference for the study, the constraints, the expected benefits of the new system, etc., in the light of the user requirements. The system proposal is prepared by the system analyst (who studies the system) and placed before the user management. The management may accept the proposal, in which case the cycle proceeds to the next stage; it may also reject the proposal or request modifications to it. In summary, the system study phase passes through the following steps:  Problem identification and project initiation  Background analysis  Inference or findings (system proposal) 3.2.2 Feasibility Study If the system proposal is acceptable to the management, the next phase is to examine the feasibility of the system. The feasibility study is basically a test of the proposed system in the light of its workability, its ability to meet the user's requirements, its effective use of resources and, of course, its cost effectiveness. These aspects are categorized as technical, operational, economic and schedule feasibility. The main goal of the feasibility study is not to solve the problem but to determine whether the problem is worth solving within the defined scope. During the feasibility study, the costs and benefits are estimated with greater accuracy to find the Return on Investment (ROI), and the resources needed to complete the detailed investigation are defined. The result is a feasibility report submitted to the management, which may be accepted, accepted with modifications, or rejected. The cycle proceeds only if the management accepts it. 3.2.3 Detailed System Study The detailed investigation of the system is carried out in accordance with the objectives of the proposed system. This involves a detailed study of the various operations performed by the system and their relationships within and outside the system. During this process, data are collected on the available files,
  • 40. 40 decision points and transactions handled by the present system. Interviews, on-site observation and questionnaires are the tools used for the detailed system study. The following steps make it easy to draw the exact boundary of the new system under consideration:  Keeping the problems and new requirements in view  Working out the pros and cons, including new areas of the system All the data and findings must be documented in the form of detailed data flow diagrams (DFDs), a data dictionary, logical data structures and miniature specifications. The main points to be covered in this stage are:  Specification of what the new system is to accomplish, based on the user requirements  A functional hierarchy showing the functions to be performed by the new system and their relationships with each other  Functional networks, which are similar to the functional hierarchy but highlight the functions common to more than one procedure  A list of attributes of the entities: the data items which need to be held about each entity (record) 3.2.4 System Analysis Systems analysis is a process of collecting factual data, understanding the processes involved, identifying problems and recommending feasible suggestions for improving the functioning of the system. It involves studying the business processes, gathering operational data, understanding the information flow, finding bottlenecks and evolving solutions to overcome the weaknesses of the system so as to achieve the organizational goals. System analysis also includes subdividing the complex processes involving the entire system, and identifying the data stores and manual processes. The major objective of systems analysis is to find answers, for each business process, to the questions: What is being done? How is it being done? Who is doing it? When is it being done? Why is it being done? How can it be improved? It is more of a thinking process and involves the creative skills of the system analyst. It attempts to produce a new, efficient system that satisfies the current needs of the user and has scope for future growth within the organizational constraints. The result of this process is a logical system design. Systems analysis is an iterative process that continues until a preferred and acceptable solution emerges. 3.2.5 System Design Based on the user requirements and the detailed analysis of the existing system, the new system must be designed. This is the system design phase, the most crucial phase in the development of a system. The logical system design arrived at as a result of systems analysis is converted into a physical system design. Normally, the design proceeds in two stages: 1. Preliminary or General Design 2. Structured or Detailed Design 3.2.5.1 Preliminary or General Design In the preliminary or general design, the features of the new system are specified, and the costs of implementing these features and the benefits to be derived from them are estimated. If the project is still considered feasible, the next design stage is taken up. 3.2.5.2 Structured or Detailed Design In the detailed design stage, computer-oriented work begins in earnest. At this stage, the design of the
  • 41. 41 system becomes more structured. Structured design is a blueprint of a computer system solution to a given problem, having the same components and inter-relationships as the original problem. Input, output, databases, forms, codification schemes and processing specifications are drawn up in detail. In the design stage, the programming language and the hardware and software platform on which the new system will run are also decided. Several tools and techniques are used for describing the system design. These tools and techniques are:  Flowchart  Data flow diagram (DFD)  Data dictionary  Structured English  Decision table  Decision tree The system design involves: i. Defining precisely the required system output ii. Determining the data requirements for producing the output iii. Determining the medium and format of files and databases iv. Devising processing methods and the use of software to produce output v. Determining the methods of data capture and data input vi. Designing input forms vii. Designing codification schemes viii. Detailing manual procedures ix. Documenting the design 3.2.6 Coding The system design needs to be implemented to make it a workable system. This demands the coding of the design into a computer-understandable language, i.e., a programming language. This is also called the programming phase, in which the programmer converts the program specifications into computer instructions. It is an important stage in which the defined procedures are transformed into control specifications with the help of a computer language. The programs coordinate the data movements and control the entire process in a system. It is generally felt that the programs must be modular in nature: this helps in fast development, maintenance and future changes, if required. 3.2.7 Testing Before actually putting the new system into operation, a test run of the system is done to remove any bugs. It is an important phase of a successful system. After coding the whole system, a test plan should be developed and run on a given set of test data; the output of the test run should match the expected results. Sometimes, system testing is considered a part of the implementation process. Using the test data, the following test runs are carried out: 1. Program test 2. System test 3.2.7.1 Program test When the programs have been coded, compiled and brought to working condition, they must be individually tested with the prepared test data, and any undesirable behavior must be noted and debugged (error correction).
  • 42. 42 3.2.7.2 System Test After the program test has been carried out for each program of the system and the errors removed, the system test is done. At this stage the test is performed on actual data: the complete system is executed on actual data, and at each stage of execution the results or output of the system are analyzed. During this analysis it may be found that the outputs do not match the expected output of the system; in such cases, the errors in the particular programs are identified, fixed and tested again for the expected output. When it is ensured that the system is running error-free, the users are invited to try the system with their own actual data, so that it can be shown running as per their requirements. 3.2.8 Implementation After user acceptance of the newly developed system, the implementation phase begins. Implementation is the stage of a project during which theory is turned into practice. The major steps involved in this phase are:  Acquisition and installation of hardware and software  Conversion  User training  Documentation The hardware and the relevant software required for running the system must be made fully operational before implementation. Conversion is also one of the most critical and expensive activities in the system development life cycle: the data from the old system needs to be converted to operate in the format of the new system, and the database needs to be set up with security and recovery procedures fully defined. During this phase, all the programs of the system are loaded onto the user's computer. After loading the system, training of the users starts. The main topics of such training are:  How to execute the package  How to enter the data  How to process the data (processing details)  How to take out the reports 3.2.8.1 Changeover After the users are trained on the computerized system, the work has to shift from manual to computerized working. This process is called 'changeover'. The following strategies are used for the changeover of the system. 3.2.8.1.1 Direct Changeover This is the complete replacement of the old system by the new system. It is a risky approach and requires comprehensive system testing and training. 3.2.8.1.2 Parallel run In a parallel run both systems, i.e., the computerized and the manual, are executed simultaneously for a certain defined period, and the same data is processed by both. This strategy is less risky but more expensive for the following reasons:  Manual results can be compared with the results of the computerized system.
  • 43. 43  The operational work is doubled.  Failure of the computerized system at an early stage does not affect the working of the organization, because the manual system continues to work as it used to. 3.2.8.1.3 Pilot run In this type of run, the new system is run with data from one or more previous periods for the whole or part of the system, and the results are compared with the old system's results. It is less expensive and less risky than the parallel run approach; this strategy builds confidence, and errors are traced easily without affecting operations. 3.2.9 Maintenance Maintenance is necessary to eliminate errors in the system during its working life and to tune the system to any variations in its working environment. It has been seen that there are always some errors in a system that must be noted and corrected. Maintenance also means reviewing the system from time to time. The review of the system is done for:  Knowing the full capabilities of the system  Knowing the required changes or additional requirements  Studying the performance If a major change to the system is needed, a new project may have to be set up to carry out the change; the new project then proceeds through all the above life-cycle phases. 3.2.10 Documentation The documentation of the system is also one of the most important activities in the system development life cycle, as it ensures the continuity of the system. There are generally two types of documentation prepared for any system: 1. User or operator documentation 2. System documentation 3.2.10.1 User or Operator Documentation The user documentation is a complete description of the system from the user's point of view, detailing how to use or operate the system. It also includes the major error messages likely to be encountered by the users. 3.2.10.2 System Documentation The system documentation contains the details of the system design, programs, their coding, system flow, data dictionary, process descriptions, etc. This helps in understanding the system and permits changes to be made to the existing system to satisfy new user needs.
  • 44. 44 3.3 DFD 3.3.1 Context Diagram The context diagram is the highest level in a data flow diagram and contains only one process, representing the entire system; this process is given the number zero. All external entities are shown on the context diagram, as well as the major data flows to and from them. The diagram does not contain any data stores and is fairly simple to create once the external entities and the data flows to and from them are known to the analysts. Figure 37. DFD Context Diagram As shown in the context diagram, the system interacts with the mobile or desktop devices by requesting a face image of a user; the devices capture an image using their cameras and send the requested image back to the system. The Face Login system is responsible for all the processing required to detect faces, following the steps presented in the system overview. The devices provide the system with information about their location and raise an alert in case the captured image is of a malicious user. The system also interacts with the user, who can supply images by different means, such as uploading an
  • 45. 45 image and providing information about it, such as its time and location. There is also a website that accepts both user uploads created manually and images captured automatically by the system. The website is connected to a large database that stores all of these images and allows ordinary Internet users to search its collection for malicious users. 3.3.2 Level 0 Diagram More detail than the context diagram permits is achievable by "exploding the diagrams." Inputs and outputs specified in the first diagram remain constant in all subsequent diagrams. The rest of the original diagram, however, is exploded into close-ups involving three to nine processes and showing data stores and new lower-level data flows. The effect is that of taking a magnifying glass to the original data flow diagram. Each exploded diagram should use only a single sheet of paper. By exploding DFDs into subprocesses, the systems analyst begins to fill in the details about data movement. The handling of exceptions is ignored for the first two or three levels of data flow diagramming. Diagram 0 is the explosion of the context diagram and may include up to nine processes; including more processes at this level results in a cluttered diagram that is difficult to understand. Each process is numbered with an integer, generally starting from the upper left-hand corner of the diagram and working toward the lower right-hand corner. The major data stores of the system (representing master files) and all external entities are included in Diagram 0. Figure 38. DFD Level 0 Diagram In the DFD Level 0 diagram shown, the mobile application captures images of the clients that pass within the field of view of the camera, and the system then detects their faces. The application can store the captured images locally in its private database and also
  • 46. 46 can send them to the system, which can be distributed on a large scale to allow different users to access it. The website is connected to a database in which all user uploads and captured images are stored. 3.4 Entity Relationship Diagram (ERD) An entity-relationship diagram (ERD) is a graphical representation of an information system that shows the relationships between the people, objects, places, concepts or events within that system. An ERD is a data modeling technique that can help define business processes and can be used as the foundation for a relational database. Figure 39. ERD Diagram
  • 47. 47 3.5 UML 3.5.1 Use Case Diagram Use case diagrams are used for high-level requirement analysis of a system: when the requirements of a system are analyzed, the functionalities are captured in use cases. This diagram is a graphic depiction of the interactions among the elements of a system. A use case is a methodology used in system analysis to identify, clarify, and organize system requirements. Use case diagrams are drawn to capture the functional requirements of a system. Figure 40. UML Use Case Diagram As shown in this figure, the system provides a number of functions, such as:  Capturing images  Storing images in the database  Processing images  Finding information about users The client can interact with the system in different ways: it can be the captured face that the system
  • 48. 48 will process, or it can be the provider of a different face image to be stored in the system database, along with information about the uploaded images. The client can also search the system's database, which is distributed across the website, for specific images. The camera can capture images and provide information about its location. 3.5.2 Class Diagram The class diagram is a static diagram: it represents the static view of an application. Class diagrams are used not only for visualizing, describing and documenting different aspects of a system but also for constructing executable code of the software application. A class diagram describes the attributes and operations of a class and also the constraints imposed on the system. Class diagrams are widely used in the modeling of object-oriented systems because they are the only UML diagrams which can be mapped directly to object-oriented languages, and for this reason they are widely used at construction time. A class diagram shows a collection of classes, interfaces, associations, collaborations and constraints; it is also known as a structural diagram, and its purpose is to model the static view of an application. UML diagrams like the activity diagram and sequence diagram can only give the sequence flow of the application, but the class diagram is different, which is why it is the most popular UML diagram in the coder community.
  • 49. 49 Figure 41. UML Class Diagram 3.5.3 Sequence Diagram UML sequence diagrams are used to show how objects interact in a given situation. An important characteristic of a sequence diagram is that time passes from top to bottom: the interaction starts near the top of the diagram and ends at the bottom (i.e. lower equals later). A popular use for them is to document the dynamics of an object-oriented system: for each key collaboration, diagrams are created that show how objects interact in various representative scenarios for that collaboration. The sequence diagram is used primarily to show the interactions between objects in the sequential order in which those interactions occur. Much like the class diagram, developers typically think sequence diagrams are meant exclusively for them; however, an organization's business staff can find sequence diagrams useful for communicating how the business currently works by showing how various business objects interact. Besides documenting an organization's current affairs, a business-level sequence diagram can be used as a requirements document to communicate requirements for a future system implementation. During the requirements phase of a project, analysts can take use cases to the next level by providing a more formal level of refinement; when that occurs, use cases are often refined into one or more sequence diagrams.
  • 50. 50 Figure 42. UML Sequence Diagram In the sequence diagram presented, the camera captures images from the users; it is always live, so users' face images can be captured at any time. Once the camera has captured an image, it is sent to the system for processing, which follows all the steps in the system overview. First there is a preprocessing step using the Retinex algorithm, which adjusts the image brightness. Then skin pixels are detected in the image and, based on the skin, face regions are extracted. If any region likely to be a face is detected, it is sent to the feature extraction step for further processing. If there are features in the region, further processing takes place by enhancing the detected features and measuring distances; if no features are extracted, the system goes back and selects another region, as sketched below.
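The following control-flow sketch summarizes the loop the sequence diagram describes. Every function here is a hypothetical stub standing in for the corresponding stage of the system; none of these names come from the project itself.

```python
# Hypothetical stubs for the stages described in the system overview.
def retinex_preprocess(image):        return image
def detect_skin(image):               return [[1]]
def extract_face_regions(image, m):   return [image]
def extract_features(region):         return ["eyes", "nose", "mouth"]
def enhance_features(features):       return features
def measure_distances(features):      return [1.0] * len(features)

def process_capture(image):
    image = retinex_preprocess(image)          # brightness adjustment
    skin_mask = detect_skin(image)             # skin / non-skin classification
    for region in extract_face_regions(image, skin_mask):
        features = extract_features(region)    # face components in the region
        if not features:
            continue                           # no features: try another region
        features = enhance_features(features)  # detection enhancement step
        return measure_distances(features)     # feature vector for recognition
    return None                                # no face found in this capture

print(process_capture("captured frame"))
```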
  • 51. 51 3.6 System Overview A simple diagram showing the steps followed, from accepting a new image through to the end of the system, is given in the following figure: Figure 43. System Flow Diagram First, an image containing a face is considered, and some enhancements are made to it by simple preprocessing mechanisms. Because the aim is to detect skin and other regions in which any color effect will influence the results, it is important to make sure that colors remain relatively constant under varying illumination conditions. One of the preprocessing techniques used is the Retinex algorithm [5]. The Retinex algorithm, originally proposed by Land and McCann in 1971, is one of the algorithms that can enhance images suffering from poor lighting and changing illumination conditions. It consists of two major steps: estimation and normalization of illumination. It overcomes the discrepancy between what the human eye naturally sees and what a camera collects under certain changing conditions, much as the retina of the human eyeball does, hence the name Retinex. Retinex, which belongs to the center/surround class of color constancy algorithms, modifies each input RGB pixel value, where the output value is determined by the input pixel (the center) and its surrounding neighbors, aiming to remove the effects of color casts, noise, nearby objects, contrast and illumination changes. The center is each RGB pixel value and the surround is a Gaussian function; illumination can be eliminated by using Gaussian masks to smooth the original image. Compared to histogram equalization (HE), it gives better results. This helps find most skin pixels, where present, in many cases, including loss of data that heavily changes the skin color, such as that caused by dividing the image by a constant. The following figure shows an example of applying this algorithm: the image is first divided by 5, 10, and 15, as shown in (b), and the results of applying the algorithm are shown in (c).
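Below is a minimal single-scale Retinex sketch, assuming OpenCV and NumPy. Treating the algorithm as a single Gaussian surround is a simplification for illustration, and the sigma value is likewise assumed, as the text does not specify the surround scale.

```python
import cv2
import numpy as np

def single_scale_retinex(img_bgr, sigma=80):
    img = img_bgr.astype(np.float64) + 1.0           # avoid log(0)
    surround = cv2.GaussianBlur(img, (0, 0), sigma)  # illumination estimate
    retinex = np.log(img) - np.log(surround)         # reflectance in log domain
    # Normalize each channel back to the displayable 0-255 range.
    out = np.zeros_like(retinex)
    for c in range(3):
        ch = retinex[:, :, c]
        out[:, :, c] = 255 * (ch - ch.min()) / (ch.max() - ch.min() + 1e-12)
    return out.astype(np.uint8)

# "dark_face.jpg" is a hypothetical poorly lit input image.
enhanced = single_scale_retinex(cv2.imread("dark_face.jpg"))
```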
  • 52. 52 Figure 44. Retinex Algorithm Results. Original RGB image (a); image after division by 5, 10, and 15 (b); result of the Retinex algorithm (c) These results show that the Retinex algorithm works well under loss of data, having the ability to restore it. After that, the skin detector is applied. This detector is regarded as a classifier that assigns every pixel to one of two categories: 1. Skin 2. Not skin It sets all skin-like pixels to 1 and all other pixels to 0, producing a binary image as shown in the next figure. Figure 45. Skin Detection Example. Original RGB image (a); detected skin (b); skin after morphological operations (c) Glasses, brows and other objects can affect the result of face detection after skin has been successfully detected, as shown in (b).
  • 53. 53 To sharpen the results and remove small objects that cannot be faces, morphological operations are used; the outcome is shown in (c). The two basic operations are erosion and dilation. Erosion cuts away the boundaries of foreground objects in a binary image, so foreground regions shrink in size and the holes within them grow larger. Dilation is the reverse of erosion. Two further operations based on dilation and erosion are opening and closing. Finally, a combination of these morphological operations is used. Other effects, such as regarding the neck or shoulders as part of the face, are eliminated by applying the face feature detector. This detector checks for components of the face region, such as the nose and mouth, and any object that does not contain these components is eliminated. After detecting these components, the distances between them are measured to create a feature vector. This vector is the output of the system; it is compared with vectors from other faces to check whether the two faces are identical, so that it can serve as part of a recognition system. 3.7 System Phases This section describes the major phases involved in the system development. 3.7.1 Image Capture In this step, an image containing a face is captured using a simple camera. The image format can be any of the available formats, such as:  Joint Photographic Experts Group (JPEG)  Portable Network Graphics (PNG)  Tagged Image File Format (TIFF)  Windows Bitmap (BMP)  Portable Pixmap (PPM)  Portable Graymap (PGM)  Portable Bitmap (PBM) Examples of images that the system can work with are shown in the next figure: Figure 46. Examples of input images to the system This captured image is the input to the system and will be further processed. 3.7.2 Image Preprocessing Image preprocessing, also called image restoration, can significantly increase the reliability of an optical inspection. Several filter operations that intensify or reduce certain image details enable an easier or faster evaluation. It involves the correction of distortion, degradation, and noise introduced during the imaging process.
  • 54. 54 Preprocessing commonly involves removing low-frequency background noise, normalizing the intensity of individual particle images, removing reflections, and masking portions of images. Image preprocessing is the technique of enhancing data images prior to computational processing. There are four categories of image preprocessing methods, according to the size of the pixel neighborhood used to calculate a new pixel brightness: 1. pixel brightness transformations, 2. geometric transformations, 3. preprocessing methods that use a local neighborhood of the processed pixel, 4. image restoration that requires knowledge of the entire image. Image preprocessing methods exploit the considerable redundancy in images: neighboring pixels corresponding to one object in real images have essentially the same or similar brightness values, so a distorted pixel can often be restored as an average of the neighboring pixels. If preprocessing aims to correct some degradation in the image, the nature of the a priori information is important. It may be knowledge about the nature of the degradation, where only very general properties of the degradation are assumed; knowledge about the properties of the image acquisition device and the conditions under which the image was obtained, where the nature of the noise (usually its spectral characteristics) is sometimes known; or knowledge about the objects that are searched for in the image, which may simplify the preprocessing very considerably. If knowledge about the objects is not available in advance, it can be estimated during processing. Illumination variation is the most troublesome effect and needs to be eliminated from the input image. The next figure shows an example of the illumination effect. Figure 47. Illumination effect Illumination has enormously complex effects on the image of an object. In the image of a familiar face, changing the direction of illumination leads to shifts in the location and shape of shadows, changes in highlights, and reversals of contrast gradients. Yet everyday experience shows that humans are remarkably good at recognizing faces despite such variations in lighting. It can be examined how humans recognize faces given the image variations caused by changes in lighting direction and by cast shadows. One issue is whether faces are represented in an illumination-invariant or illumination-dependent manner. A second issue is whether cast shadows improve face recognition by providing information about surface shape and illumination direction, or hinder performance by introducing spurious edges that must be discounted prior to recognition. The influences of illumination direction and cast shadows have been examined using both short-term and long-term memory paradigms.
  • 55. 55 Images of the same face appear different due to the change in lighting. If the change induced by illumination is larger than the difference between individuals, systems will not be able to recognize the input image. There are many ways in which preprocessing can be applied:  Normalization  Filters  Soft focus, selective focus  User-specific filters  Static/dynamic binarisation  Image plane separation  Binning One of the techniques used is histogram equalization, a technique for adjusting image intensities to enhance contrast. The contrast will not always be increased: there are cases where histogram equalization makes things worse, and the contrast is decreased. It provides a sophisticated method for modifying the dynamic range and contrast of an image by altering the image so that its intensity histogram has a desired shape. Unlike contrast stretching, histogram modeling operators may employ non-linear and non-monotonic transfer functions to map between the pixel intensity values of the input and output images. Histogram equalization employs a monotonic, non-linear mapping which reassigns the intensity values of pixels in the input image such that the output image contains a uniform distribution of intensities (i.e. a flat histogram). This technique is used in image comparison processes (because it is effective in detail enhancement) and in the correction of non-linear effects introduced by, say, a digitizer or display system. Equalization means mapping one distribution (the given histogram) to another (a wider and more uniform distribution of intensity values) so that the intensity values are spread over the whole range. An example of how HE works is shown in the following figure. Figure 48. Before and after applying histogram equalization over an image
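A minimal sketch of histogram equalization, assuming OpenCV and a hypothetical grayscale input file "input.jpg":

```python
import cv2

# OpenCV's built-in histogram equalization operates on single-channel images.
gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
equalized = cv2.equalizeHist(gray)   # output histogram is roughly flat
cv2.imwrite("equalized.png", equalized)
```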
  • 56. 56 Another algorithm that can be used is RETINEX, which is more effective than histogram equalization in both quality and speed. RETINEX: 're-tin-ex, 'ret-nex; noun; (pl) retinexes; from Medieval Latin retina and Latin cortic. Edwin Land coined the word for his model of human color vision, combining the retina of the eye and the cerebral cortex of the brain. More specifically, it is defined in image processing as a process that automatically provides visual realism to images. It is one of the color constancy enhancement algorithms and uses the Fast Fourier Transform. It has the ability to determine the colors of objects irrespective of the illumination conditions and of the colors of nearby objects. This is an important characteristic of the Human Visual System (HVS): the HVS is able to compute descriptors which define an object's color independently of the illumination present in the scene and independently of the colors of the surrounding objects. The goal of color constancy research is to achieve these descriptors, which means discounting the effect of illumination and obtaining a canonical color appearance. The basic Retinex model is based on the assumption that the HVS operates with three retinal-cortical systems, each one independently processing the low, middle and high frequencies of the visible electromagnetic spectrum. Each system produces one lightness value which determines, by superposition, the perception of color in the HVS. On digital RGB images, the lightness is represented by the triplet (Lr, Lg, Lb) of lightness values in the three chromatic channels. Edges are the main source of information for achieving color constancy. Moreover, Land and McCann realized that taking the ratio between two adjacent points can both detect an edge and eliminate the effect of non-uniform illumination. An example of the results obtained after applying the Retinex algorithm to a degraded color image is shown in the next figure. Figure 49. Applying RETINEX over degraded color images
  • 57. 57 3.7.3 Skin and Face Detection Skin detection is the process of finding skin-colored pixels and regions in an image or a video. This process is typically used as a preprocessing step to find regions that potentially contain human faces and limbs. Several computer vision approaches have been developed for skin detection. A skin detector typically transforms a given pixel into an appropriate color space and then uses a skin classifier to label the pixel as either skin or non-skin. A skin classifier defines a decision boundary of the skin color class in the color space, based on a training database of skin-colored pixels. Detecting skin-colored pixels, although it seems a straightforward task, has proven quite challenging for many reasons. The appearance of skin in an image depends on the illumination conditions (illumination geometry and color) under which the image was captured. Humans are very good at identifying object colors under a wide range of illuminations; this is called color constancy, and it remains a mystery of perception. An important challenge in skin detection is therefore to represent the color in a way that is invariant, or at least insensitive, to changes in illumination; this is why Retinex is used. The choice of the color space greatly affects the performance of any skin detector and its sensitivity to changes in illumination conditions. Another challenge comes from the fact that many objects in the real world have skin-tone colors, for example wood, leather, skin-colored clothing, hair and sand. This causes any skin detector to produce many false detections in the background if the environment is not controlled. In any given color space, skin color occupies a part of the space, which might be a compact or a large region; such a region is usually called the skin color cluster. Skin classification is a one-class or two-class classification problem: a given pixel is classified and labeled as skin or non-skin given a model of the skin color cluster in a given color space. In the context of skin classification, true positives are skin pixels that the classifier correctly labels as skin, and true negatives are non-skin pixels that the classifier correctly labels as non-skin. Any classifier makes errors: it can wrongly label a non-skin pixel as skin or a skin pixel as non-skin. The former type of error is referred to as a false positive (false detection), while the latter is a false negative. A good classifier should have low false positive and false negative rates. As in any classification problem, there is a trade-off between false positives and false negatives: the looser the class boundary, the fewer the false negatives and the more the false positives; the tighter the class boundary, the more the false negatives and the fewer the false positives. This makes the choice of the color space extremely important in skin detection. The color needs to be represented in a color space where the skin class is most compact, in order to be able to model the skin class tightly. The choice of the color space also directly affects the kind of classifier that should be used. 3.7.3.1 Color Space Selection The human skin color has a restricted range of hues and is not deeply saturated, since the appearance of skin is formed by a combination of blood (red) and melanin (brown, yellow).
Therefore, the human skin color does not fall randomly in a given color space, but is clustered in a small area of the space. This area, however, is not the same in all color spaces. The next figure shows density plots of skin-colored pixels obtained from images of different Asian people, plotted in different color spaces. The same skin color is located differently in different color spaces.
  • 58. 58 Figure 50. Density plots of Asian skin in different color spaces The next figure shows density plots of skin-colored pixels from people of different races (Asian, African and Caucasian) plotted in different color spaces. Figure 51. Density plots of Asian, African and Caucasian skin in different color spaces A variety of color spaces have been used in the skin detection literature, with the aim of finding a color space in which skin color is invariant to illumination conditions. The choice of color space affects the
  • 59. 59 shape of the skin class, which in turn affects the detection process. Some color spaces have their luminance component separated from the chromatic components, and these are known to provide higher discriminability between skin and non-skin pixels under various illumination conditions. Skin color models that operate only on chrominance subspaces, such as Cb-Cr and H-S, have been found effective in characterizing various human skin colors. Skin classification can be accomplished by explicitly modeling the skin distribution in certain color spaces using parametric decision rules: some researchers defined a set of rules to describe the skin cluster in RGB space, while others used a set of bounding rules to classify skin regions in both the YCbCr and HSV spaces. A variety of classification techniques have been used in the literature for the task of skin classification. A skin classifier is a one-class classifier that defines a decision boundary of the skin color class in a feature space; in the context of skin detection, the feature space is simply the chosen color space. Any pixel whose color falls inside the skin color class boundary is labeled as skin. The choice of skin classifier is therefore directly induced by the shape of the skin class in the color space chosen by the skin detector: the more compact and regularly shaped the skin color class, the simpler the classifier. To enable greater flexibility in detecting skin color, not just one color space is used but a combination of color spaces: RGB, HSV, and YCbCr. 3.7.3.2 RGB-H-CbCr Color Space [6] While RGB, HSV and YUV (YCbCr) are standard models used in various color imaging applications, not all of their information is necessary to classify skin color. This model utilizes the additional hue and chrominance information of the image, on top of the standard RGB properties, to improve the discrimination between skin and non-skin pixels. Skin regions are classified using the RGB boundary rules introduced by Peer et al. in [7] together with additional new rules for the H and CbCr subspaces. These rules are constructed based on the skin color distribution obtained from the training images. The classification of the extracted regions is further refined using a parallel combination of morphological operations. The next figure shows the steps from detecting skin color to detecting faces in the image. Figure 52. System overview for face detection using skin color In this color-based approach to face detection, the proposed RGB-H-CbCr skin model is first formulated using a set of skin-cropped training images. Three commonly known color spaces (RGB, HSV and YCbCr) are used to construct the proposed hybrid model. Bounding planes or rules for each skin color subspace are constructed from their respective skin color distributions.
  • 60. 60 In the first step of the detection stage, these bounding rules are used to segment the skin regions of input test images. After that, a combination of morphological operations is applied to the extracted skin regions to eliminate possible non-face skin regions. Finally, the last step labels all the face regions in the image and returns them as detected faces. In this system, there is an additional preprocessing step: applying the Retinex algorithm. 3.7.3.3 Skin Color Subspace Analysis In RGB space, the skin color region is not well distinguished in any of the three channels; a simple observation of its histogram shows that it is spread uniformly across a large spectrum of values. In HSV space, the H (hue) channel shows significant discrimination of skin color regions, as observed from the H-V and H-S plots in the next figure. Figure 53. H-V and H-S subspace plots Figure 54. Distribution of the H (Hue) channel Both plots exhibit very similar pixel distributions. In the hue distribution shown above, most of the skin color samples are concentrated at values between 0 and 0.1 and between 0.9 and 1.0 (on a normalized scale of 0 to 1). Some studies have indicated that pixels belonging to skin regions possess similar chrominance (Cb and Cr) values, and these values have been shown to provide good coverage of all human races. The Cb-Cr subspace offers the best discrimination between skin and non-skin regions. The next figure shows the compact distribution of the chrominance values (Cb and Cr) in comparison with the luminance value (Y). It is also observed that varying the intensity value of Y (luminance) does not alter the skin color distribution in the Cb-Cr subspace; the luminance property merely characterizes the brightness of a particular chrominance value.
  • 61. 61 Figure 55. Distribution of Y, Cb and Cr respectively 3.7.3.4 Skin Color Bounding Rules From the skin color subspace analysis, a set of bounding rules is derived from all three color spaces: RGB, YCbCr and HSV. All rules are derived for intensity values between 0 and 255. In RGB space, the skin color rules introduced by Peer et al. [8] can be used: one rule defines skin color under uniform daylight illumination, and a second rule covers skin color under flashlight or daylight lateral illumination. The two rules are combined by a logical OR so that both daylight and night-time skin colors are detected. Based on the observation that the Cb-Cr subspace is a strong discriminant of skin color, five bounding rules enclosing the Cb-Cr skin color region are formulated. In the HSV space, the hue values exhibit the most noticeable separation between skin and non-skin regions, and two cut-off levels are estimated as the H subspace skin boundaries.
  • 62. 62 The rules for the RGB, YCbCr and HSV spaces are combined by a logical AND to detect skin; this defines the skin range in the combined color spaces. 3.7.3.5 Morphological Operations Up to this step, skin has been detected efficiently. The next step of the face detection system involves the use of morphological operations to refine the extracted skin regions. Sub-regions can easily be grouped together by applying a simple dilation on the large regions, and holes and gaps within each region can be closed by a flood fill operation. The problem of occlusion often occurs when detecting faces in large groups of people: even faces in close proximity may be detected as one single region, due to the nature of pixel-based methods. Hence, morphological opening is used to "open up" or pull apart narrow, connected regions. The next figure shows an example: Figure 56. Detected skin after morphological operations 3.7.3.6 Skin Detection Results This section compares the listed color spaces in terms of their accuracy in detecting skin color.
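To make the bounding rules of Section 3.7.3.4 and the AND combination above concrete, the sketch below evaluates them per pixel. The RGB daylight and flashlight rules follow Peer et al. as commonly reproduced; the Cb-Cr line coefficients and the H cut-offs are assumed reconstructions of the figures in the cited RGB-H-CbCr model and should be verified against the original.

```python
# Sketch of the combined RGB-H-CbCr skin rules. The exact constants for
# the Cb-Cr lines and the H cut-offs are assumptions; check them against
# the figures in the cited model [6].
def rule_rgb(r, g, b):
    daylight = (r > 95 and g > 40 and b > 20 and
                max(r, g, b) - min(r, g, b) > 15 and
                abs(r - g) > 15 and r > g and r > b)
    flashlight = (r > 220 and g > 210 and b > 170 and
                  abs(r - g) <= 15 and b < r and b < g)
    return daylight or flashlight           # logical OR of the two RGB rules

def rule_cbcr(cb, cr):
    # Five bounding lines enclosing the Cb-Cr skin region (assumed values).
    return (cr <= 1.5862 * cb + 20 and
            cr >= 0.3448 * cb + 76.2069 and
            cr >= -4.5652 * cb + 234.5652 and
            cr <= -1.15 * cb + 301.75 and
            cr <= -2.2857 * cb + 432.85)

def rule_h(h_degrees):
    return h_degrees < 25 or h_degrees > 230  # assumed H cut-off levels

def is_skin(r, g, b, cb, cr, h_degrees):
    # Logical AND across the three color spaces, as described above.
    return rule_rgb(r, g, b) and rule_cbcr(cb, cr) and rule_h(h_degrees)

print(is_skin(180, 120, 90, 110, 150, 10))  # example pixel values
```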