SPECIAL SECTION ON AI-DRIVEN BIG DATA PROCESSING:
THEORY, METHODOLOGY, AND APPLICATIONS
Received February 3, 2019, accepted February 13, 2019, date of publication February 28, 2019, date of current version March 18, 2019.
Digital Object Identifier 10.1109/ACCESS.2019.2902133
Graph Fusion for Finger Multimodal Biometrics
HAIGANG ZHANG 1, SHUYI LI1, YIHUA SHI1, AND JINFENG YANG1,2
1Tianjin Key Laboratory for Advanced Signal Processing, Civil Aviation University of China, Tianjin 300300, China
2Shenzhen Polytechnic, Shenzhen 518055, China
Corresponding author: Jinfeng Yang (jfyang@cauc.edu.cn)
This work was supported by the National Natural Science Foundation of China under Grant 61806208.
ABSTRACT In terms of biometrics, a human finger carries three modal traits, the fingerprint, finger-vein, and finger-knuckle-print, which makes finger trimodal fusion recognition both convenient and practical. The scale inconsistency of the finger trimodal images is a major obstacle to effective fusion. It is therefore important to develop a unified expression of the finger trimodal features. In this paper, a graph-based feature extraction method for finger biometric images is proposed. The graph-structure feature expression resolves the feature-space mismatch among the three finger modalities. We provide two fusion frameworks, serial fusion and coding fusion, to integrate the finger trimodal graph features. The results not only advance finger multimodal biometrics but also offer a solution framework for other multimodal feature fusion problems. The experimental results show that the proposed graph fusion recognition approach achieves better and more effective recognition performance in finger biometrics.
INDEX TERMS Finger biometrics, multimodal feature, fusion recognition, graph theory.
I. INTRODUCTION
Biometrics is a type of technology that allows a person to be identified and authenticated according to a set of recognizable and specific data [1]. Owing to its high user acceptance and convenience, biometric authentication is replacing traditional recognition technologies. With the increasing demand for accurate human authentication, however, a single biometric modality often fails to achieve satisfactory results [2]. In contrast, multimodal fusion biometrics, which integrates unimodal feature information, usually performs better in terms of universality, accuracy and security [3], [4].
Fingers contain rich biometric information, and the
finger-based biometrics technology has been developing
for a long time [5], [6]. There are three biometric patterns
dwelling in a finger, fingerprint(FP), finger-vein(FV) and
finger-knuckle-print(FKP). With convenient collection and
relatively compact distribution of trimodal sources of a finger,
finger-based multimodal biometric fusion recognition has
more universal and practical advantages [7], [8]. Because of the mismatch of the finger trimodal feature spaces, however, exploring a reliable and effective expression and fusion recognition approach for finger multimodal features remains a challenging problem in practice [9], [10]. (The associate editor coordinating the review of this manuscript and approving it for publication was Zhanyu Ma.)
There are four levels of fusion in a multimodal biometric system: the pixel level [11], the feature level [12], [13], the matching score level [14] and the decision level [15]. Fusion at the pixel level, also called sensor fusion, refers to the direct fusion of multimodal images acquired by sensors. Since large differences exist between different modal images in content, noise, etc., fusing the trimodal biometrics of fingers at the pixel level is unstable and infeasible [16]. Fusion at the matching level and the decision level is based on the matching results and the recognition results of the single-modal biometric features, respectively. Although these strategies are easy to implement, they have many inherent defects, such as strong algorithm dependence, low efficiency and limited recognition performance. More critically, fusion at the matching or decision layer ignores the discriminating power of multimodal biometrics [17], [18]. Such fusion strategies are therefore not optimal in nature. Fusion at the feature level aims to extract more accurate feature information by performing a series of processing steps on the biometric images. Compared with the other fusion strategies, fusion at the feature level can improve the discriminating
VOLUME 7, 2019
2169-3536 © 2019 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
28607
ability of fusion features, and presents much more stable and
effective fusion performance [4], [9].
Recently, several related works on the fusion of finger biometrics have been reported. Alajlan et al. [14] proposed a biometric identification method based on the fusion of FP and heartbeat at the decision level. Reference [19] introduced a biometric system that integrates face and FP. Evangelin and Fred [20] proposed a fusion identification framework for FP, palm print and FKP at the feature level. In 2016, Khellat-Kihel et al. [12] proposed a recognition framework for the fusion of the trimodal biometrics of the finger at the feature and decision levels. Reference [21] developed a magnitude-preserved competitive coding feature extraction method for FV and FKP images, and obtained robust recognition performance after fusing the two biometric features. In addition, some feature extraction methods, such as the GOM method [39] and the MRRID method [38], can promote efficient fusion of finger trimodal images at the feature layer. It is worth mentioning that in our previous work, we discussed the granulation fusion method for finger trimodal biometric features and solved the basic theoretical problems of fusion using granular computing as a methodology [13], [22]. The existing fusion algorithms have two main shortcomings:
♦ Image scale inconsistency. Finger trimodal scale inconsistency remains an important factor hindering effective fusion. Straightforward scale normalization easily destroys image structure information, so stable feature fusion cannot be achieved.
♦ Large storage space and long recognition time. Practicality is a factor that must be considered in finger trimodal fusion recognition.
The scale inconsistency, or incompatibility, of the finger trimodal images is the most critical cause of fusion difficulty. It is therefore very important to develop a theory giving a unified expression of the trimodal features of the finger [23]. The rapid development of graph-based image analysis motivates us to extract graph-structure features for the three finger modalities. Graph theory, an important branch of mathematics, dates back to 1736 [24], [25]. Graphs are often used to describe a connection between objects, or a specific relationship between the parts of an object. As an image description method, a graph can capture the various structural relationships in an image. Over the past decades, many graph-based methods have been proposed for pattern recognition problems [26]. Luo et al. [27] employed a graph embedding method for 3D image clustering. Some graph-based coding operators, such as the LGS and WLGS operators [28], [29], have been successfully applied to feature coding. Reference [30] overviews recent graph spectral techniques in graph signal processing (GSP) for image and video processing, including image compression, restoration, filtering and segmentation.
In this paper, we explore the description of finger trimodal features based on graph theory and propose reasonable fusion frameworks. First, a directed weighted graph is established for each finger modality, where a block-wise operation generates the vertices. The Steerable filter is applied to extract the oriented energy distribution (OED) of each block, and the weight functions of the vertices and edges of the graph are calculated. With suitable block partitioning strategies, the trimodal graph features resolve the dimensional incompatibility, which lays the foundation for trimodal fusion. Then we propose two finger trimodal feature fusion frameworks, named serial fusion and coding fusion, which integrate the trimodal finger features. Serial fusion splices the graph features of the three finger modalities, while coding fusion applies competitive coding to combine the three graph features into one comprehensive feature. In addition, any standard classifier can be connected after the fused feature for pattern recognition.
To demonstrate the validity of the proposed fusion recognition frameworks for finger trimodal biometrics, we design a device that acquires the finger trimodal images simultaneously and construct a homemade finger trimodal database. Simulation results demonstrate the effective and satisfactory performance of the proposed finger biometric fusion recognition frameworks.
II. TRIMODAL IMAGE ACQUISITION
To obtain FP, FV and FKP images, we have designed a homemade finger trimodal imaging system [31], as shown in Fig. 1. The proposed imaging system, which can simultaneously collect FP, FV and FKP images, integrates modal acquisition, image storage, image processing and recognition modules. With it we constructed a finger trimodal database.
FIGURE 1. The schematic of finger trimodal imaging device.
To reduce finger pose variations, a fixed collection groove is designed into the image acquisition device. It avoids, as much as possible, the image distortion caused by finger offset and rotation. In the proposed imaging system, the collection of the FP image is relatively simple: it is obtained through the FP collector at the bottom. For FV imaging, the array luminaire is made of near-infrared (NIR) light-emitting diodes (LEDs) with a wavelength of 850 nm. The light source, which can penetrate a finger, is on the palm side of the finger, while the imaging lens is on the back side. The FKP modality is imaged by reflected light, based on the reflection of a visible light source off the skin surface of the finger.
Through a software interface, as shown in Fig. 2, the finger trimodal image acquisition task can be carried out conveniently. From the left of Fig. 2, we can see that the captured image contains not only the useful region but also some uninformative parts. Hence, region of interest (ROI) localization for the original trimodal images is critical. The ROI localization strategy for the three modalities is presented in the Appendix. In addition, the original images are often blurred by lighting and noise. Image enhancement highlights more texture information in the three finger modality images, which is very helpful for improving recognition accuracy. The image enhancement method and some results are presented in Sec. III, during the introduction of graph establishment.
FIGURE 2. A software interface of imaging acquisition.
III. GRAPH FEATURE EXTRACTION AND
FUSION FRAMEWORK
In this section, we establish a weighted graph structure model to characterize the finger biometrics and present the fusion frameworks for the trimodal finger images. We divide the original modal image into blocks, and Steerable filters with arbitrary orientations are applied to describe the oriented energy contained in the local blocks. Based on the graph features, we design two fusion frameworks, named serial fusion and coding fusion, which fully integrate the trimodal information and improve recognition accuracy and robustness.
A. GRAPH FEATURE EXTRACTION
The weighted graph for a finger biometric image can be represented as G(V, E, W), where V is the vertex set, E is the edge set, and W contains the weights of the edges, representing the relationships among the vertices [32], [42].
There are two main ways to select the vertices of the graph. One can treat the pixels as vertices; however, this yields a large graph and a large amount of computation when the image is relatively large. Alternatively, the locations and types of minutiae, such as crossovers, bifurcations and terminations, can be selected as vertices. But the selection of minutiae points is often operator-dependent and lacks robustness when the image quality is poor.
Here for the graph establishment, we divide the image into
blocks and select the features of blocks as the vertices of the
graph, as shown in Fig. 3. For an image with size of M × N,
we design a sliding window with size of h × w, which would
traverse the entire image. The striding steps along the right
and down directions are recorded as r and d respectively.
Then the number of vertices of the graph is

Num = round((M − h + r)/r) × round((N − w + d)/d)    (1)

where round(·) is the round-up operator: if (M − h + r)/r is not an integer, it is rounded up to the nearest integer.
FIGURE 3. Block division for the selecting of vertices.
FIGURE 4. Graph establishment of finger biometrics.
For a finger biometric image, a local region is highly correlated with its adjacent regions, while the correlation between two distant regions is relatively low. Therefore, we consider only the relationships between each vertex and its surrounding vertices in specific directions, as shown in Fig. 4. It is worth mentioning that, compared with an undirected graph, a directed graph better describes the local relations of an image. Based on a triangulation scheme, edges are established between the target vertex and its neighbors in the down, right and bottom-right directions. The weight function describes the relation between vertices and, with an appropriate calculation method, attaches image details to the graph. We define the weight function as

w(Vi, Vj | eij ∈ E) = W(Vi) · S(Vi, Vj)    (2)

where W(Vi) represents the comprehensive feature of the block containing vertex Vi, and S(Vi, Vj) is the similarity of the two block features.
To reliably strengthen the finger texture information, the three finger modality images need to be enhanced effectively. Here, a bank of Gabor filters with 8 orientations [33] and the Weber's Law Descriptor (WLD) [34] are combined for image enhancement and light attenuation elimination. To extract reliable finger skeletons, a multi-threshold method [35] is used to binarize the enhanced finger image, and the thinning algorithm proposed in [36] is used to extract the finger skeletons from the binary images; some results are shown in Fig. 5(a)-(c).
FIGURE 5. The results during skeleton extraction. (a) An original image.
(b) The enhanced result. (c) The skeleton of binary image. (d) A block
of (c). (e) Oriented energy.
The value of a vertex represents the comprehensive feature of the corresponding block region, while the weight function of two connected vertices represents their similarity. We select the vertex features and compute the weight functions using the oriented energy distribution feature based on the Steerable filter.
The Steerable filter performs well in analyzing the curve structures of an image. By combining a set of basis filters in different directions, the image can be convolved in any direction to extract the texture in that direction. The oriented energy distribution (OED) represents the convolution results of the Steerable filter with the image region in different directions. The OED feature is calculated as

hθ(x, y) = Σ_{j=1}^{K} kj(θ) hj(x, y)    (3)

E(θ) = ( Σ_{x=1}^{M} Σ_{y=1}^{N} hθ(x, y) I(x, y) )²    (4)

OED = {E(1), E(2), · · · , E(θ)}    (5)

where hθ(x, y) is the Steerable filter in the direction θ, {hj(x, y)}, j = 1, . . . , K, is the set of basis filters and K is the number of basis filters. kj(θ) is the interpolation function, which depends only on θ. I(x, y) is the corresponding image region, and E(θ) is the convolution energy in the direction θ. Fig. 5(e) illustrates the energy map obtained for the block shown in Fig. 5(d).
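To make the OED computation concrete, here is a hedged sketch in Python. The oriented kernel below is a first-derivative-of-Gaussian stand-in for the paper's steerable basis (the exact basis filters, K and σ are assumptions of ours), so the numbers are illustrative only:

```python
import numpy as np

def oriented_kernel(h, w, theta):
    """First-derivative-of-Gaussian kernel steered to angle theta (radians).
    A stand-in for the paper's steerable basis, used here for illustration."""
    ys, xs = np.mgrid[0:h, 0:w]
    y = ys - (h - 1) / 2.0
    x = xs - (w - 1) / 2.0
    sigma = min(h, w) / 4.0
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    # derivative along direction theta: cos(theta)*dG/dx + sin(theta)*dG/dy
    return (-(x * np.cos(theta) + y * np.sin(theta)) / sigma**2) * g

def oed(block, n_orient=8):
    """Oriented energy distribution of one image block, Eqs. (4)-(5):
    E(theta) = (sum_xy h_theta(x, y) * I(x, y))^2 for each sampled theta."""
    h, w = block.shape
    thetas = [k * np.pi / n_orient for k in range(n_orient)]
    return np.array([(np.sum(oriented_kernel(h, w, t) * block))**2
                     for t in thetas])
```

On a block containing a vertical step edge, the energy peaks at the horizontal-gradient orientation (θ = 0), as expected for an oriented-energy descriptor.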
The maximum energy is selected as the vertex weight, while the similarity of the OEDs of two connected vertices is taken as the weight function:

V(I) = max(OED | I)    (6)

S(Vi, Vj) = exp( − Σ_{k=1}^{θ} (Ei(k) − Ej(k))² / (2σ²) )    (7)
The adjacency matrix can be used to describe the graph structure of the finger modalities, where the elements of the adjacency matrix represent the weight functions. Any classifier can then perform pattern recognition based on the adjacency matrix expression of the finger modalities.
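Under the grid rule above (edges to the right, down and down-right neighbours) and the Gaussian similarity of Eq. (7), the adjacency matrix can be assembled as in the following sketch; the input `oed_vecs` (one precomputed OED vector per block) and `sigma` are assumptions for illustration:

```python
import numpy as np

def build_adjacency(oed_vecs, rows, cols, sigma=1.0):
    """Directed grid-graph adjacency for block vertices laid out row-major.
    Each vertex connects to its right, down and down-right neighbours;
    the edge weight is the Gaussian similarity of the two OED vectors
    (Eq. 7).  `oed_vecs` is a (rows*cols, K) array of oriented energies."""
    n = rows * cols
    A = np.zeros((n, n))

    def sim(i, j):
        diff = oed_vecs[i] - oed_vecs[j]
        return np.exp(-np.sum(diff**2) / (2 * sigma**2))

    for rr in range(rows):
        for cc in range(cols):
            i = rr * cols + cc
            for dr, dc in [(0, 1), (1, 0), (1, 1)]:  # right, down, diagonal
                r2, c2 = rr + dr, cc + dc
                if r2 < rows and c2 < cols:
                    A[i, r2 * cols + c2] = sim(i, r2 * cols + c2)
    return A
```

For a 2 × 2 grid this produces exactly five directed edges, and identical OED vectors yield the maximal weight exp(0) = 1.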
B. FEATURE FUSION
In the above subsection, we presented a novel feature extraction frame based on the graph structure for finger biometric images. The following presents two fusion frameworks for the three finger modalities.
1) SERIAL FUSION STRATEGY
Consider a weighted graph with Num vertices. The adjacency matrix (abbreviated M) is a Num × Num matrix whose (i, j) entry is the weight function of the ith and jth vertices:

M(i, j) = S(Vi, Vj), if eij ∈ E;  0, otherwise    (8)
Because only the relationships between adjacent vertices are considered when establishing the feature graph, most elements of the adjacency matrix are zero; the adjacency matrix is sparse, so there is no need to store all of its entries. The compressed sparse row (CSR, also called compressed row storage (CRS)) format is an effective representation of a sparse matrix: it uses three one-dimensional arrays holding the nonzero values, the row pointers and the column indices. Fig. 6 presents a simple example of the CSR representation of a sparse matrix. Since the rules of graph establishment for finger biometric images are fixed, the positions of the non-zero elements in the adjacency matrix are determined. In the CSR representation we therefore only need to retain the value array A and can ignore I and J.
FIGURE 6. CSR expression of sparse matrix.
The serial fusion strategy splices the trimodal CSR vectors in tandem to obtain a fused vector. Fig. 7 presents the serial fusion framework. Ap, Av and Akp are the feature vectors extracted from the three adjacency matrices after graph feature extraction. The serial fusion strategy integrates the three feature vectors into one fused vector, V = [Ap, Av, Akp]. The adjacency-matrix-based fusion framework is simple: under the premise of preserving the complete features, the matrix dimension is reduced and the computational efficiency is improved. For feature learning and matching, one can apply well-known classification algorithms from machine learning, such as the Support Vector Machine (SVM), the Extreme Learning Machine (ELM) or the Softmax classifier.
FIGURE 7. Serial fusion of the trimodal graph features of finger. (a) ROI of
trimodal images of finger. (b) Graph structure. (c) Adjacency matrixes.
(d) CSR vector expression. (e) Serial fusion.
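A minimal sketch of the storage argument above: with the edge positions fixed by the graph rule, only the CSR value array needs to be kept per modality, and serial fusion is a concatenation (function names are ours, not from the paper):

```python
import numpy as np

def csr_values(adj):
    """Row-major array of the nonzero entries of a sparse adjacency matrix.
    Because the grid rule fixes where the nonzeros sit, the index arrays
    of the CSR format are identical for every finger and can be dropped;
    only the value array A is stored."""
    return adj[adj != 0]   # boolean indexing scans in row-major (C) order

def serial_fusion(adj_fp, adj_fv, adj_fkp):
    """Serial fusion: splice the three CSR value arrays into one vector,
    V = [Ap, Av, Akp]."""
    return np.concatenate([csr_values(adj_fp), csr_values(adj_fv),
                           csr_values(adj_fkp)])
```

The fused vector's length is the total number of edges in the three graphs, far smaller than three full Num × Num matrices.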
2) CODING FUSION STRATEGY
Another fusion framework for the trimodal graph features, based on competitive coding, is presented next. When establishing the graph, block segmentation generates the vertex expression. By properly setting the segmentation sizes and sliding steps, it is easy to normalize the trimodal feature graphs, so we obtain three adjacency matrices of the same size. The competitive coding fusion method then combines the three adjacency matrices into one integrated matrix.
Fig. 8 presents the fusion strategy based on competitive coding. For the target entries in the same position of the trimodal adjacency matrices, we employ a seven-bit binary code [c1, c2, c3, c4, c5, c6, c7] to represent the feature information. Considering the (i, j)th entries of the trimodal graphs, c4 represents the comprehensive feature of FVi,j, FPi,j and FKPi,j, while (c3, c5), (c2, c6) and (c1, c7) are the competitive coding results of the entries to the left and right of the target entry for the FP, FV and FKP features, respectively. For example, if FVi,j−1 > FVi,j+1, then (c2, c6) = (1, 0). It is worth mentioning that the trimodal entries should be normalized before coding fusion. The general coding rule for trimodal fusion is as follows:

(ci, c8−i) = (1, 0), if Fi,j−1 > Fi,j+1;  (0, 1), if Fi,j−1 < Fi,j+1;  (1, 1), if Fi,j−1 = Fi,j+1

c4 = 1, if FVi,j ≥ (FPi,j + FKPi,j)/2;  0, if FVi,j < (FPi,j + FKPi,j)/2    (9)

where F ranges over the trimodal biometric features (FP, FV, FKP) and i = 1, 2, 3.
FIGURE 8. Coding fusion of the trimodal graph features of finger.
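A sketch of the seven-bit rule of Eq. (9) for one matrix position. The pairing of modalities with bit pairs follows the mapping stated in the text ((c3, c5) for FP, (c2, c6) for FV, (c1, c7) for FKP); inputs are assumed to be normalized matrices:

```python
import numpy as np

def code_bits(left, right):
    """Competitive code for one modality, Eq. (9): compare the entries
    to the left and right of the target position."""
    if left > right:
        return 1, 0
    if left < right:
        return 0, 1
    return 1, 1

def coding_fusion_at(fp, fv, fkp, i, j):
    """Seven-bit code [c1..c7] at position (i, j) of the normalized
    trimodal adjacency matrices: (c3,c5) from FP, (c2,c6) from FV,
    (c1,c7) from FKP; c4 compares FV against the mean of FP and FKP."""
    c3, c5 = code_bits(fp[i, j - 1], fp[i, j + 1])
    c2, c6 = code_bits(fv[i, j - 1], fv[i, j + 1])
    c1, c7 = code_bits(fkp[i, j - 1], fkp[i, j + 1])
    c4 = 1 if fv[i, j] >= (fp[i, j] + fkp[i, j]) / 2 else 0
    return [c1, c2, c3, c4, c5, c6, c7]
```

Applying the rule to every interior position yields the integrated code matrix used for matching.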
Through the proposed competitive-coding fusion strategy, we integrate the three adjacency matrices. For feature matching, one may apply a traditional classifier, or measure the similarity of the two matrices (the matrix matching method). The similarity of two matrices is computed as follows:
Ms = Σ_{p=1}^{n} Σ_{q=1}^{n} (A(p, q) − Ā)(B(p, q) − B̄) / √( Σ_{p=1}^{n} Σ_{q=1}^{n} (A(p, q) − Ā)² · Σ_{p=1}^{n} Σ_{q=1}^{n} (B(p, q) − B̄)² )    (10)

where Ā and B̄ represent the mean values of the fused adjacency matrices A and B, respectively.
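Eq. (10) is simply the Pearson correlation of the two matrices viewed as flat arrays; a direct sketch:

```python
import numpy as np

def matrix_similarity(A, B):
    """Matrix-matching score of Eq. (10): the Pearson correlation between
    two fused adjacency matrices, computed over all entries."""
    a = A - A.mean()
    b = B - B.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))
```

The score is 1 for identical matrices and falls toward 0 (or below) as the two fused codes diverge.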
The matrix matching method is easy to implement and understand. However, its recognition efficiency depends on the dimension of the matrix: as the scale of the graph increases, the dimension of the adjacency matrix rises sharply, which greatly increases the matching computational burden.
IV. SIMULATION RESULTS
In this section, we present the simulation results for the
unimodal recognition and trimodal fusion recognition of fin-
gers. In our homemade finger trimodal database, there are
17550 images in total from 585 fingers. For one finger of one
category, we collect 10 FP, FV and FKP images respectively.
A. UNIMODAL RECOGNITION
The graph features of the three finger modalities can each be used for identification separately. Here the simulation results of finger unimodal recognition are presented. First, we explain some common statistical measures. FAR, the false acceptance rate, measures how often an individual is incorrectly accepted, while FRR, the false rejection rate, measures how often an individual is incorrectly rejected. The receiver operating characteristic (ROC) curve is created by plotting the FAR against the FRR at various threshold settings.
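As a hedged sketch of how FAR, FRR and the EER can be computed from genuine and impostor score lists (the paper does not spell out this procedure; the thresholding convention, higher score = better match, is our assumption):

```python
import numpy as np

def far_frr(genuine, impostor, thresholds):
    """FAR/FRR curves from match scores (higher score = better match).
    FAR: fraction of impostor scores accepted (score >= t);
    FRR: fraction of genuine scores rejected (score < t)."""
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    return far, frr

def eer(genuine, impostor, thresholds):
    """Approximate equal error rate: FAR and FRR at their closest point."""
    far, frr = far_frr(genuine, impostor, thresholds)
    k = np.argmin(np.abs(far - frr))
    return (far[k] + frr[k]) / 2
```

With well-separated score distributions the two curves cross near zero, giving a small EER; overlapping distributions push the crossing point, and hence the EER, upward.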
The size of the blocks into which the finger modal images are divided for graph establishment is an important parameter. Fig. 9 shows the ROC curves for different block partitioning sizes without overlap during sliding. The ROC curves of the trimodal finger features show consistent changes: a smaller block size corresponds to a better ROC curve and a smaller equal error rate (EER). The size of the sliding block affects the number of vertices in the graph: the smaller the sliding block, the denser the set of graph vertices, which leads to a longer graph establishment time. Fig. 10 presents the graph establishment time for the trimodal finger features with different sliding block sizes. As the size of the sliding block increases, the graph establishment time decreases.
For finger unimodal biometric recognition, any classifier can be connected after the graph features. In the simulations, we apply four discriminant algorithms: ELM [37], SVM [38], softmax [39] and matrix matching. In the ELM algorithm, we use 1000 hidden nodes with the 'Sigmoid' activation function. The main SVM parameters are set to their default values: the scaling factor of the Gaussian radial basis function kernel is set to 1, the Quadratic Programming method is used to find the separating hyperplane, the maximum number of iterations is 15000, and the 'rbf' kernel function is applied. In the softmax algorithm, the weight decay parameter lambda is set to 10−4, and the maximum number of iterations is 50.
FIGURE 9. The ROC curves of block division of the finger trimodal features. (a) FV. (b) FKP. (c) FP.
FIGURE 10. The graph establishment time for different block division sizes.
Since accuracy and efficiency are two important indicators of the performance of biometric algorithms, we compare the recognition accuracy and recognition time (i.e., testing time) of the four classification algorithms. In the simulation, nine images of each finger are used for training, and the remaining one for testing. Tab. 1 presents the recognition accuracy for the FV, FKP and FP patterns based on the four classifiers. The recognition performance of the three finger modalities is consistent across the four classifiers. The ELM algorithm shows the best recognition performance, while the performance of the matrix matching method is not very satisfactory. As the block size becomes larger, the recognition accuracy of each algorithm decreases, consistent with the ROC curves.
TABLE 1. Recognition accuracy for finger unimodal recognition.
Fig. 11 shows the testing time of the four classifiers. To present a more visible gap among the four classifiers in the same figure, we slightly shifted the positions of the curves; for example, the value of the green curve plus 150 is the actual result for the matrix matching method in the first subfigure. Owing to the random selection of hidden nodes, the testing time of the ELM algorithm is the most satisfactory. In comparison, the testing time of matrix matching is several hundred times longer.
FIGURE 11. Testing time for the finger single-modal recognition based on
different classifiers.
B. TRIMODAL FUSION RECOGNITION
This subsection presents the recognition performance of
serial fusion and coding fusion of trimodal graph features
of finger. In the serial fusion, three graph features are
stitched together. Coding fusion makes feature informa-
tion more complex and has consistent dimensions compared
with unimodal graph feature, which is more conducive to
anti-counterfeiting and storage. Considering the classifica-
tion performance of four classifiers, ELM algorithm with
1000 hidden nodes and ’sin’ activation function is applied
for trimodal fusion recognition of finger. The recognition
accuracy is 99.9% for serial fusion and 99.8% for coding
fusion, which is approximately the same as the recognition
accuracy of unimodal graph feature.
To verify the stability and robustness of the fusion recognition frameworks, we added white noise with mean 20 and variance 10 to each of the trimodal finger features. Tab. 2 presents the comparison between unimodal and trimodal fusion recognition. Due to the interference of the external noise, the recognition performance of both the unimodal and the trimodal features decreases to some extent. However, trimodal fusion recognition is more robust and stable than unimodal recognition, and coding fusion shows slightly better results than serial fusion. The trimodal fusion recognition takes a relatively long testing time, but it remains within an acceptable range.
Fusion at the feature layer is the most stable and accurate form of trimodal fusion in biometrics. Here we conduct comparison experiments between the proposed fusion frameworks and two feature-layer fusion strategies [13], [22]. In addition, three feature extraction methods, Oricode [37], MRRID [40] and GOM [41], are applied to the finger trimodal images, and the ELM algorithm with the same learning parameters performs recognition after feature fusion. Table 3 presents the simulation results, from which we can see that the proposed fusion frameworks for finger trimodal biometric features are more accurate and efficient than the other fusion methods. In particular, the graph coding fusion framework performs slightly better than the graph serial fusion framework and obtains the best recognition efficiency. In addition, the ROC curves in Fig. 12 prove the superiority of the proposed fusion frameworks from another point of view. In summary, the proposed graph fusion strategies achieve more effective and stable fusion recognition for the finger trimodal biometric images with relatively less recognition time.
TABLE 2. Comparison results for the unimodal and trimodal biometrics.
TABLE 3. Comparison results for different fusion frameworks.
FIGURE 12. Comparison results of the ROC for different fusion methods.
V. CONCLUSION
This paper focuses on finger multimodal biometric fusion recognition. The finger contains three biometric modalities, the fingerprint, finger-vein and finger-knuckle-print, all of which have discriminative properties and are capable of identification. In this paper, a feature extraction model based on a graph structure representation is established, which solves the mismatch of the finger trimodal feature spaces. Two fusion frameworks for the finger multimodal features are provided: serial fusion stitches the trimodal graph features together, while coding fusion integrates them into one comprehensive feature based on competitive coding. Finger trimodal fusion recognition improves recognition accuracy and robustness. In the experiments, a homemade finger trimodal dataset demonstrates the good performance of the proposed finger trimodal fusion recognition frameworks.
Future Work: Extracting graph features from the finger trimodal images solves the scale inconsistency problem, which is the key to effective and stable fusion recognition for finger biometrics. This approach is not only effective for finger multimodal fusion but also applicable to other biometric modalities. The related future work is as follows:
(1) Finger biometric image enhancement is a prerequisite for extracting graph features. Inappropriate filter parameters or a filter structure without generalization can damage the finger biometric images, and texture breaks are very common. In future work, we will therefore establish an image inpainting model to improve graph feature accuracy.
(2) Recently, Graph Convolutional Neural Networks (GCNNs) have shown satisfactory results in processing graph-structured data, which motivates us to employ GCNNs for recognition based on the graph features of the finger trimodal images. In addition, establishing a deep fusion model based on GCNNs for the finger trimodalities is also future work.
APPENDIX
ROI LOCALIZATION
In this appendix, we present the ROI localization method for
the finger trimodal images. FP image collection is relatively
mature and the collection area is highly targeted, so the
acquired image can be used directly as the ROI. For FV and
FKP images, we design corresponding ROI localization
strategies. Fig. 13 presents the main results of ROI
localization for FV and FKP images: (a) shows the FV images,
(b) shows the outlines of the FV pattern, and (c) shows the
FKP images.
FIGURE 13. ROI localization for the FV and FKP images.
The joint cavity of the finger is filled with biological
tissue effusion, whose light transmittance is higher than
that of the surrounding tissue. We exploit this difference
in light transmission to localize the ROI of the FV image.
Because the finger blocks the light-transmitting plate, the
left and right outlines of the finger are easy to localize.
After the outlines of the FV image are determined, the image
is rotated to the correct orientation, and the ROI is then
cropped from the outlines.
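The outline-based FV ROI idea can be illustrated on a synthetic transmission image, where the finger region transmits light (bright) and the background stays dark. All shapes and thresholds below are illustrative assumptions, not the paper's parameters, and the rotation-correction step is omitted for brevity.

```python
import numpy as np

# Synthetic transmission image: a bright band where the finger lies.
img = np.zeros((40, 60))
img[:, 18:42] = 1.0

# Left/right outlines: first and last bright column in each row.
mask = img > 0.5
left = mask.argmax(axis=1)                       # first bright column
right = img.shape[1] - 1 - mask[:, ::-1].argmax(axis=1)  # last bright column

# Crop the ROI bounded by the detected outlines.
roi = img[:, left.min():right.max() + 1]
print(roi.shape)  # (40, 24)
```

On a real FV image the per-row outlines would vary, and their slope would supply the rotation angle used to straighten the finger before cropping.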
The FKP images have no obvious characteristic marks, so it
is difficult to locate the ROI in the vertical direction.
According to the biological structure of the finger, the ROI
of the FKP lies on the back side of the proximal joint.
During the ROI localization of the FV image, we have already
obtained the position of the proximal joint region. Through
spatial mapping, the position of the proximal joint can be
mapped from the FV imaging space to the FKP imaging space.
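The spatial mapping step can be sketched as applying a calibrated transform between the two imaging spaces. The affine matrix below is a made-up calibration for illustration only; in practice it would be determined by the geometry of the capture device.

```python
import numpy as np

# Assumed FV->FKP affine calibration (2x3), purely illustrative.
A = np.array([[1.1, 0.0,  5.0],
              [0.0, 0.9, -3.0]])

# Proximal-joint position found in the FV imaging space.
joint_fv = np.array([120.0, 80.0])

# Map it into the FKP imaging space via homogeneous coordinates.
joint_fkp = A @ np.append(joint_fv, 1.0)
print(joint_fkp)  # [137.  69.]
```

The mapped point then anchors the vertical position of the FKP ROI, resolving the ambiguity that the FKP image alone cannot.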
HAIGANG ZHANG received the B.S. degree from
the University of Science and Technology, Liaon-
ing, in 2012, and the Ph.D. degree from the School
of Automation and Electrical Engineering, Univer-
sity of Science and Technology Beijing, in 2018.
He is currently with the College of Electronic
Information and Automation, Civil Aviation Uni-
versity of China. His research interests include
machine learning, image processing, and biometric
recognition.
SHUYI LI received the B.S. degree in electronic
information engineering from the Changchun Uni-
versity of Technology, in 2016. She is currently
pursuing the master’s degree with the Civil Avia-
tion University of China. Her main research inter-
ests include image processing and biometric
recognition.
YIHUA SHI received the B.Sc. degree from Inner
Mongolia Normal University, China, in 1993,
and the M.Sc. degree from Zhengzhou Univer-
sity, China, in 2006. She is currently an Asso-
ciate Professor with the Civil Aviation University
of China. Her major research interests include
machine learning, pattern recognition, computer
vision, multimedia processing, and data mining.
JINFENG YANG received the B.Sc. degree
from the Zhengzhou University of Light Indus-
try, China, in 1994, and the M.Sc. degree from
Zhengzhou University, China, in 2001, and the
Ph.D. degree in pattern recognition and intelligent
system from the National Laboratory of Pattern
Recognition, Institute of Automation, Chinese
Academy of Sciences, China, in 2005. He is cur-
rently a Professor with the Civil Aviation Univer-
sity of China. His major research interests include
machine learning, pattern recognition, computer vision, multimedia pro-
cessing, and data mining. He regularly served as PC Members/Referees
for international journals and top conferences, including, IEEE TIP, IVC,
PR, PRL, ICCV, CVPR, and ECCV. He currently serves as a syndic of the
Chinese Society of Image and Graphics and the Chinese Association for
Artificial Intelligence.
Glass Ceramics: Processing and PropertiesPrabhanshu Chaturvedi
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingrakeshbaidya232001
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Call Girls in Nagpur High Profile
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTINGMANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTINGSIVASHANKAR N
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
result management system report for college project
result management system report for college projectresult management system report for college project
result management system report for college projectTonystark477637
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...ranjana rawat
 

Recently uploaded (20)

UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduits
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdf
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
 
Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and Properties
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writing
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
 
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTINGMANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
 
result management system report for college project
result management system report for college projectresult management system report for college project
result management system report for college project
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
 

Graph Fusion for Finger Multimodal Biometrics

I. INTRODUCTION

Biometrics is a class of technologies that identify and authenticate a person according to a set of recognizable and specific data [1]. Owing to its high user acceptance and convenience, biometric authentication is taking over from traditional recognition technologies. With the increasing demand for accurate human authentication, however, a single biometric trait often fails to achieve satisfactory recognition results [2]. In contrast, multimodal fusion biometrics, which integrates the information of several unimodal traits, usually behaves better in terms of universality, accuracy, and security [3], [4].

Fingers contain rich biometric information, and finger-based biometrics has been under development for a long time [5], [6]. Three biometric patterns dwell in a finger: fingerprint (FP), finger-vein (FV), and finger-knuckle-print (FKP). With convenient collection and the relatively compact distribution of the three modal sources on a finger, finger-based multimodal fusion recognition has clear advantages in universality and practicality [7], [8]. Because of the mismatch of the finger trimodal feature spaces, however, finding a reliable and effective expression and fusion recognition approach for finger multimodal features is still a challenging problem in practice [9], [10]. (The associate editor coordinating the review of this manuscript and approving it for publication was Zhanyu Ma.)

Fusion in a multimodal biometric system can be performed at four levels: pixel level [11], feature level [12], [13], matching-score level [14], and decision level [15]. Fusion at the pixel level, also called sensor fusion, refers to the direct fusion of the multimodal images acquired by the sensors. Since large differences exist between different modal images in content, noise, etc., pixel-level fusion of the finger trimodal biometrics is unstable and infeasible [16].
Fusion at the matching level and the decision level is based, respectively, on the matching results and the recognition results of the single-modal biometric features. Although these strategies are easy to implement in a system, they have many inherent defects, such as strong algorithm dependence, low efficiency, and limited recognition performance. More critically, fusion at the matching or decision layer ignores the discriminating power of the multimodal biometrics [17], [18]; such a fusion strategy is therefore not optimal in nature. Fusion at the feature level aims to extract more accurate feature information by performing a series of processing steps on the biometric images. Compared with the other fusion strategies, feature-level fusion can improve the discriminating ability of the fused features and presents much more stable and effective fusion performance [4], [9].

(VOLUME 7, 2019 · 2169-3536 © 2019 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.)

Recently, several related research works on the fusion of finger biometrics have been reported. Alajlan et al. [14] proposed a biometric identification method based on the fusion of FP and heartbeat at the decision level. Reference [19] introduced a biometric system that integrates face and FP. Evangelin and Fred [20] proposed a fusion identification framework for FP, palm print, and FKP at the feature level. In 2016, Khellat-Kihel et al. [12] proposed a recognition framework for the fusion of the three finger biometric modalities at the feature and decision levels. Reference [21] developed a magnitude-preserving competitive coding feature extraction method for FV and FKP images and obtained robust recognition performance after fusing the two biometric features. In addition, some feature extraction methods, such as the GOM method [39] and the MRRID method [38], can promote efficient fusion of the finger trimodal images at the feature layer. It is worth mentioning that in our previous work we discussed a granulation fusion method for the finger trimodal biometric features and solved the basic theoretical problems of fusion using granular computing as a methodology [13], [22].

The existing fusion algorithms have two main shortcomings:

♦ Forced image-scale normalization. The scale inconsistency of the finger trimodal images is still an important factor preventing effective fusion. Straightforward scale normalization easily destroys image structure information, so stable feature fusion cannot be achieved.

♦ Large storage space and long recognition time. Practicality is a factor that must be considered in finger trimodal fusion recognition.

The scale inconsistency, or incompatibility, of the finger trimodal images is the most critical problem causing fusion difficulties.
It is therefore very important to develop a theory that gives a unified expression of the trimodal features of the finger [23]. The rapid development of graph-based image analysis motivates us to extract graph-structure features for the finger trimodalities. Graph theory, an important branch of mathematics, dates back to 1736 [24], [25]. Graphs are often used to describe a certain connection between objects, or a specific relationship between the various parts of an object. As an image description method, a graph can describe the various structural relationships within an image well. In the past decades, many graph-based methods have been proposed for pattern recognition problems [26]. Luo et al. [27] employed a graph embedding method for 3D image clustering. Some graph-based coding operators, such as the LGS and WLGS operators, have been successfully applied to feature coding [28], [29]. Reference [30] overviews recent graph spectral techniques in graph signal processing (GSP) for image and video processing, including image compression, image restoration, image filtering, and image segmentation.

In this paper, we explore the description of finger trimodal features based on graph theory and propose a reasonable fusion framework. First, a directional weighted graph is established for each modality of the finger, where a block-wise operation is performed to generate the vertices. A Steerable filter is applied to extract the oriented energy distribution (OED) of each block, from which the weight functions of the vertices and edges of the graph are calculated. Based on different block-partitioning strategies, the trimodal graph features solve the problem of dimensional incompatibility, which lays the foundation for trimodal fusion. Then we propose two finger trimodal feature fusion frameworks, named serial fusion and coding fusion, which integrate the trimodal features of the finger.
The serial fusion splices the graph features of the three finger modalities, while the coding fusion applies competitive coding to combine the three graph features into one comprehensive feature. In addition, any well-known classifier attached after the fused feature can be used for pattern recognition. To demonstrate the validity of the proposed fusion recognition frameworks for the finger trimodal biometrics, we have also designed a device that acquires the finger trimodal images simultaneously and constructed a homemade finger trimodal database. Simulation results demonstrate the effective and satisfactory performance of the proposed finger biometric fusion recognition frameworks.

II. TRIMODAL IMAGE ACQUISITION

To obtain FP, FV, and FKP images, we have designed a homemade finger trimodal imaging system [31], as shown in Fig. 1. The imaging system, which can collect FP, FV, and FKP images simultaneously, integrates modal acquisition, image storage, image processing, and recognition modules. With it we constructed a finger trimodal database.

FIGURE 1. The schematic of the finger trimodal imaging device.

To reduce finger pose variations, a fixed collection groove is designed into the image acquisition device. It avoids, as much as possible, the image distortion caused by finger offset and rotation. In the proposed imaging system, the collection of the FP image is relatively simple: it is obtained through the FP collector at the bottom. For FV imaging, the array luminaire is made of near-infrared (NIR) light-emitting diodes (LEDs) with a wavelength of 850 nm. The light source, which can penetrate a finger, is on the palm side of the finger, while the imaging lens is on the back side. Based on the reflection of a visible light source on the skin surface of the finger, the FKP modality is imaged from the reflected light.

Through a software interface, shown in Fig. 2, the finger trimodal image acquisition task can be carried out conveniently. As the left of Fig. 2 shows, a captured image contains not only the useful region but also some uninformative parts. Hence, region-of-interest (ROI) localization for the original trimodal images is critical; the ROI localization strategy for the three modalities is presented in the appendix. In addition, the original images are often blurred by lighting conditions and noise. Image enhancement can highlight more texture information in the three modality images, which is very helpful for improving recognition accuracy. The image enhancement method and some results are presented in Sec. III together with the introduction of the graph establishment.

FIGURE 2. A software interface of the imaging acquisition.

III. GRAPH FEATURE EXTRACTION AND FUSION FRAMEWORK

In this section, we establish the weighted-graph structure model that characterizes the finger biometrics and present the fusion frameworks for the trimodal finger images. We divide each original modal image into many blocks, and Steerable filters with arbitrary orientations are applied to describe the oriented energy contained in the local blocks. Based on the graph features, we design two fusion frameworks, named serial fusion and coding fusion, which fully integrate the trimodal information and improve recognition accuracy and robustness.

A. GRAPH FEATURE EXTRACTION

The weighted graph for a finger biometric image can be represented as G(V, E, W), where V is the vertex set, E is the edge set, and W holds the weights of the edges, representing the relationships among the vertices [32], [42]. There are usually two main ways to select the vertices of such a graph.
One way is to treat the pixels as the vertices of the graph; however, this results in a large graph and a large amount of computation when the image is relatively large. Alternatively, the locations and types of minutiae, such as crossovers, bifurcations, and terminations, can be selected as the vertices. The selection of minutiae points, however, is often heavily influenced by human factors and lacks robustness when the image quality is poor. Here, for the graph establishment, we divide the image into blocks and select the features of the blocks as the vertices of the graph, as shown in Fig. 3. For an image of size M × N, we design a sliding window of size h × w that traverses the entire image. The striding steps along the right and down directions are denoted r and d, respectively. The number of vertices of the graph is then

Num = round((M − h + r)/r) × round((N − w + d)/d)    (1)

where round(·) is the rounding-up operator: if (M − h + r)/r is not an integer, it is rounded up to the nearest integer.

FIGURE 3. Block division for the selection of vertices.

FIGURE 4. Graph establishment of finger biometrics.

In a finger biometric image, a local region has a high correlation with its adjacent regions, while the correlation between two regions relatively far apart is low. Therefore, we keep only the relationships between a vertex and its surrounding vertices in specific directions, as shown in Fig. 4. It is worth mentioning that, compared with an undirected graph, a directed graph is better at describing the local relations of an image. Based on a triangulation scheme, edges are established between the target vertex and the vertices along its down, right, and bottom-right directions.
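As a sanity check on Eq. (1), the vertex count can also be obtained by sliding the window explicitly and counting positions (the last window is clamped to the image border when the stride does not divide evenly). This is a minimal sketch; the sizes used below are illustrative, not values from the paper.

```python
import math

def num_vertices(M, N, h, w, r, d):
    """Eq. (1): Num = ceil((M-h+r)/r) * ceil((N-w+d)/d)."""
    return math.ceil((M - h + r) / r) * math.ceil((N - w + d) / d)

def count_by_sliding(M, N, h, w, r, d):
    """Count window positions directly; an extra clamped window is added
    at the border when the stride does not land exactly on it."""
    rows = len(range(0, M - h + 1, r)) + (1 if (M - h) % r else 0)
    cols = len(range(0, N - w + 1, d)) + (1 if (N - w) % d else 0)
    return rows * cols
```

For non-overlapping blocks (r = h, d = w) this reduces to ceil(M/h) × ceil(N/w), which is the block count used in the non-overlap experiments later in the paper.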
The weight function describes the relationship between vertices; by choosing an appropriate calculation for it, the details of the image are attached to the graph. We define the weight function as

w(Vi, Vj | eij ∈ E) = W(Vi) · S(Vi, Vj)    (2)

where W(Vi) represents the comprehensive feature of the block containing vertex Vi, and S(Vi, Vj) is the similarity degree of the two block features.
  • 4. H. Zhang et al.: Graph Fusion for Finger Multimodal Biometrics In order to reliably strength the finger texture information, finger three modalities images need to be enhanced effec- tively. Here, a bank of Gabor filters with 8 orientations [33] and Weberąŕs Law Descriptor (WLD) [34] are combined for images enhancement and light attenuation elimination. In order to extract reliable finger skeletons, a multi-threshold method [35] is used to obtain the binary results of the enhanced finger image. A thinning algorithm proposed in [36] is used to extract the finger skeletons of the binary images, some results are shown in Fig. 5(a)(b)(c). FIGURE 5. The results during skeleton extraction. (a) An original image. (b) The enhanced result. (c) The skeleton of binary image. (d) A block of (c). (e) Oriented energy. The value of the vertex represents the comprehensive fea- ture in the corresponding block region, while the weighted function of two connected vertices represents their similarity. We make the selection of vertex features and the calculation of weighted functions using the oriented energy distribution feature based on Steerable filter. The Steerable filter has good performance for the analysis of the curve structure of the image. By integrating a set of base filter banks in different directions, the image can be convolved in any direction to extract the image texture in the corresponding direction. The Oriented Energy Distribu- tion (OED) represents the convolution results of the Steerable filter and the image region in different directions. The OED feature is calculated as hθ (x, y) = K j=1 kj (θ) hj (x, y) (3) E (θ) =   K j=1 M x=1 N y=1 hθ (x, y) I (x, y)   2 (4) OED = {E (1), E (2), · · ·, E (θ)} (5) where hθ (x, y) represents the Steerable filter in the direction of θ. K j=1 hj (x, y) is a set of base filter banks and K is the total number base filters. kj (θ) is the interpolation function, which is only related with θ. 
I(x, y) represents the corresponding image region, and E(θ) is the convolution energy in the direction θ. Fig. 5(e) illustrates an energy map obtained for the block shown in Fig. 5(d).

The maximum energy is selected as the vertex weight, while the similarity of the OEDs of two connected vertices is used as the weight function:

V(I) = max(OED | I)    (6)

S(Vi, Vj) = exp( − Σ_{k=1}^{θ} (Ei(k) − Ej(k))² / (2σ²) )    (7)

The adjacency matrix can describe the graph structure of a finger modality, with its elements given by the weight functions. Any classifier can then perform pattern recognition based on the adjacency-matrix expression of the finger modalities.

B. FEATURE FUSION

The above subsection presented a novel feature extraction frame based on the graph structure of the finger biometric images. The following presents two fusion frameworks for the three finger modalities.

1) SERIAL FUSION STRATEGY

For a weighted graph with Num vertices, the adjacency matrix (abbreviated M) is a Num × Num matrix whose (i, j) entry is the weight function of the ith and jth vertices:

M(i, j) = S(Vi, Vj) if eij ∈ E, and 0 otherwise.    (8)

Because only the relationships between adjacent vertices are considered when establishing the feature graph, most of the elements of the adjacency matrix are zero; the adjacency matrix is therefore sparse, and there is no need to store all of its elements. The compressed sparse row (CSR, also called compressed row storage, CRS) format is an effective expression for a sparse matrix: it uses three one-dimensional arrays holding the nonzero values, the row pointers, and the column indices, respectively. Fig. 6 presents a simple example of the CSR expression of a sparse matrix.
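Putting Eqs. (7) and (8) together, the per-modality adjacency matrix over a grid of block vertices can be sketched as follows. The right/down/down-right neighbor pattern follows the triangulation described above; the OED vectors passed in stand in for the Steerable-filter output, and `sigma` is an illustrative parameter.

```python
import numpy as np

def similarity(oed_i, oed_j, sigma=1.0):
    """Eq. (7): Gaussian similarity of two oriented-energy vectors."""
    diff = np.asarray(oed_i) - np.asarray(oed_j)
    return float(np.exp(-np.sum(diff**2) / (2 * sigma**2)))

def adjacency(block_oeds, rows, cols, sigma=1.0):
    """Eq. (8): adjacency over a rows x cols grid of block vertices,
    with directed edges to the right, down, and down-right neighbors only."""
    A = np.zeros((rows * cols, rows * cols))
    for i in range(rows):
        for j in range(cols):
            v = i * cols + j
            for di, dj in ((0, 1), (1, 0), (1, 1)):  # directed neighbors
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols:
                    A[v, ni * cols + nj] = similarity(
                        block_oeds[v], block_oeds[ni * cols + nj], sigma)
    return A
```

On a 2 × 2 grid this yields exactly five directed edges, and identical OED vectors give every edge the maximal weight 1.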
Since the rules for graph establishment are fixed for the finger biometric images, the positions of the nonzero elements of the adjacency matrix are determined in advance. In the CSR expression, we therefore only need to retain the value array A and can ignore the index arrays I and J.

FIGURE 6. CSR expression of a sparse matrix.
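The three CSR arrays can be illustrated in a few lines of code (the matrix values below are arbitrary). As noted above, in this application only the value array needs to be stored, because the edge pattern, and hence the index arrays, is the same for every graph.

```python
def to_csr(dense):
    """Convert a dense matrix to the three CSR arrays:
    nonzero values, column indices, and row pointers."""
    data, indices, indptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))  # where the next row starts in `data`
    return data, indices, indptr

dense = [[0.0, 0.9, 0.0],
         [0.0, 0.0, 0.7],
         [0.0, 0.0, 0.0]]
data, indices, indptr = to_csr(dense)
# data    -> [0.9, 0.7]    the only array retained in this paper
# indices -> [1, 2]        column of each value
# indptr  -> [0, 1, 2, 2]  row start offsets into `data`
```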
  • 5. H. Zhang et al.: Graph Fusion for Finger Multimodal Biometrics The serial fusion strategy refers to the tandem splicing of trimodal CSR vectors and gets a fused vector. Fig. 7 presents the serial fusion framework. Ap, Av and Akp are feature vec- tors extracted through three adjacency matrixes after graph feature extraction. Then we apply the serial fusion strategy to integrate three feature vectors into one feature fused vector, such that V = Ap Av Akp . The adjacency matrix based fusion framework is simple. Under the premise of ensur- ing complete features, the matrix dimension is reduced and the computational efficiency is improved. For the feature learning and matching process, one would apply the famous classification algorithms in machine learning fields, such as Support Vector Machine (SVM), Extreme Learning Machine (ELM) and Softmax classifier. FIGURE 7. Serial fusion of the trimodal graph features of finger. (a) ROI of trimodal images of finger. (b) Graph structure. (c) Adjacency matrixes. (d) CSR vector expression. (e) Serial fusion. 2) CODING FUSION STRATEGY Another fusion framework for trimodal graph features based on competitive coding is presented. When establishing the graph, the block segmentation operation is applied to generate the vertex expression. By properly setting the segmentation sizes and sliding steps, it is easy to make normalization for the trimodal feature graphs. We would obtain three adjacency matrixes with the same size. Then the competitive coding fusion method is applied to combine the three adjacency matrixes into one integrated matrix. Fig. 8 presents the fusion strategy based on competitive coding. For the target pixels in the same position of the trimodal adjacency matrixes, we employ a seven-bit binary code ([c1, c2, c3, c4, c5, c6, c7]) to represent the feature infor- mation. Considering the (i, j)th pixels of trimodal graphs, c4 represents the comprehensive feature of FVi,j, FPi,j and FIGURE 8. 
Coding fusion of the trimodal graph features of finger. FKPi,j, while (c3, c5), (c2, c6), (c1, c7) are the competitive coding results of pixels at the left and right of the target pixel for the FP, FV and FKP features respectively. For example, if FVi,j−1 > FVi,j+1, (c3, c5) = (1, 0). It is worth mentioning that the trimodal image pixels should be normalized before coding fusion. A general coding way for trimodal fusion is as follows. (ci, c8−i) =    (1, 0), if Fi,j−1 > Fi,j+1 (0, 1), if Fi,j−1 < Fi,j+1 (1, 1), if Fi,j−1 = Fi,j+1 c4 = 1, if FVi,j ≥ FPij + FKPij /2 0, if FVi,j < FPij + FKPij /2 (9) where F represents the trimodal biometric images (FP, FV, FKP) and i = 1, 2, 3. Through the proposed fusion strategy based on competitive coding, we integrate three adjacency matrices together. For the feature matching process, one may apply the traditional classifier, or the similarity of two matrices (named Matrix Matching method). The computational formula for the simi- larity of two matrices is as follows. Ms = n p=1 n q=1 A (p, q)− ¯A B (p, q)− ¯B n p=1 n q=1 A (p, q)− ¯A 2 n p=1 n q=1 B (p, q)− ¯B 2 (10) where ¯A and ¯B represent the mean values of fusion adjacency matrix A and B respectively. The matrix matching method is easy to operate and to be understood. However, the recognition efficiency of matrix matching method is related to the dimension of the matrix. When the scale of the graph increases, the dimension of the adjacency matrix rises sharply, which will greatly increase the matching computational burden. IV. SIMULATION RESULTS In this section, we present the simulation results for the unimodal recognition and trimodal fusion recognition of fin- gers. In our homemade finger trimodal database, there are 17550 images in total from 585 fingers. For one finger of one category, we collect 10 FP, FV and FKP images respectively. A. UNIMODAL RECOGNITION The graph features of finger three modalities can be used for identification separately. 
Here the simulation results of finger unimodal recognition are presented. Before presenting the results, we give some necessary explanations of common statistical variables. FAR, short for false acceptance rate, measures how often the system incorrectly accepts an individual, while FRR, the false rejection rate, indicates how often the system incorrectly rejects an individual. The receiver operating characteristic (ROC) curve VOLUME 7, 2019 28611
is created by plotting the FAR against the FRR at various threshold settings. The size of the divided blocks of the finger modal images for the establishment of the graph is an important parameter. Fig. 9 shows the ROC curves of different block partitioning sizes without overlap during the sliding. The ROC curves of the trimodal features of finger show consistent changes: a smaller block size corresponds to a better ROC curve, as well as a smaller equal error rate (EER). The size of the sliding block affects the number of vertices in the graph. The smaller the size of the sliding block, the denser the set of graph vertices, which leads to a longer graph establishment time. Fig. 10 presents the graph establishment time of the trimodal features of finger for different sliding block sizes. With the increase of the size of the sliding block, the graph establishment time of the trimodal features of finger decreases.

FIGURE 9. The ROC curves of block division of finger trimodal features. (a) FV. (b) FKP. (c) FP.

FIGURE 10. The graph establishment time for different sizes of block division.

For finger unimodal biometric recognition, one can attach any classifier after the graph features. In the simulations, we apply four discriminant algorithms for recognition: ELM [37], SVM [38], softmax [39] and matrix matching. In the ELM algorithm, we set 1000 hidden nodes with the ’Sigmoid’ activation function. Some important parameters in the SVM algorithm are set to default values: the scaling factor in the Gaussian radial basis function kernel is set to 1, the quadratic programming method is used to find the separating hyperplane, the maximum number of iterations is 15000, and the ’rbf’ kernel function is applied. In the softmax algorithm, the weight decay parameter lambda is set to 10−4, while the maximum number of iterations is 50.
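The FAR, FRR and EER statistics used in the ROC analysis above can be computed from sets of genuine and impostor matching scores by sweeping a decision threshold. A minimal sketch (the function name and the linear threshold grid are our assumptions):

```python
import numpy as np

def far_frr_eer(genuine, impostor, num_thresholds=1000):
    # Sweep a decision threshold over the similarity scores and return the
    # FAR curve (impostors wrongly accepted), the FRR curve (genuine samples
    # wrongly rejected), and the equal error rate where the two curves cross.
    scores = np.concatenate([genuine, impostor])
    thresholds = np.linspace(scores.min(), scores.max(), num_thresholds)
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    idx = int(np.argmin(np.abs(far - frr)))   # closest crossing point
    eer = (far[idx] + frr[idx]) / 2
    return far, frr, eer
```

Plotting `far` against `frr` over the threshold sweep reproduces the kind of ROC curve shown in Fig. 9.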
Considering that accuracy and efficiency are two important indicators for measuring the performance of biometric algorithms, we compare the recognition accuracy and recognition time (i.e., the testing time) among the four classification algorithms. In the simulation, nine images of each finger are used for training, while the remaining one is used for testing. Tab. 1 presents the recognition accuracy of the FV, FKP and FP patterns based on the four classifiers. The recognition performances of the three finger modalities based on the four classifiers are consistent. The ELM algorithm shows the best recognition performance, while the performance of the matrix matching method is not very satisfactory. As the size of the block becomes larger, the recognition accuracy of each algorithm decreases, which is consistent with the results shown by the ROC curves.

TABLE 1. Recognition accuracy for finger unimodal recognition.

Fig. 11 shows the testing time of the four classifiers. In order to show a more visible gap between the testing times of the four classifiers in the same figure, we made a slight modification to the position of the curves. For example, the value of the green curve plus 150 is the final result for the matrix matching method in the first subfigure. Because of the random selection of hidden nodes, the testing time of the ELM algorithm is the most satisfactory. In comparison, the testing time of matrix matching is several hundred times longer.
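The nine-train/one-test protocol per finger class described above can be sketched as follows (a hypothetical helper; the flat index layout, with the ten images of each class stored contiguously, is our assumption):

```python
import numpy as np

def split_per_class(images_per_class=10, num_classes=585, test_index=9):
    # For every finger class, put nine images into the training set and
    # hold out the remaining one (at position `test_index`) for testing.
    train_idx, test_idx = [], []
    for c in range(num_classes):
        base = c * images_per_class
        for k in range(images_per_class):
            (test_idx if k == test_index else train_idx).append(base + k)
    return np.array(train_idx), np.array(test_idx)
```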
FIGURE 11. Testing time for finger single-modal recognition based on different classifiers.

B. TRIMODAL FUSION RECOGNITION
This subsection presents the recognition performance of the serial fusion and coding fusion of the trimodal graph features of finger. In the serial fusion, the three graph features are stitched together. Coding fusion makes the feature information more complex and keeps the dimension consistent with a unimodal graph feature, which is more conducive to anti-counterfeiting and storage. Considering the classification performance of the four classifiers, the ELM algorithm with 1000 hidden nodes and the ’sin’ activation function is applied for the trimodal fusion recognition of finger. The recognition accuracy is 99.9% for serial fusion and 99.8% for coding fusion, which is approximately the same as the recognition accuracy of the unimodal graph features. In order to verify the stability and robustness of the fusion recognition framework, we added white noise with mean 20 and variance 10 to each of the trimodal features of the finger. Tab. 2 presents the comparison between unimodal and trimodal fusion recognition of finger. Due to the interference of the external noise, the recognition performances of both the unimodal and the trimodal features decrease to some extent. However, the performance of trimodal fusion recognition is more robust and stable compared with that of unimodal recognition of finger. The coding fusion shows slightly better results compared with the serial fusion. For the testing time, the trimodal fusion recognition takes a relatively long time; however, it is within the allowable range. Fusion at the feature layer is the most stable and accurate way of trimodal fusion in biometrics. Here we conduct some comparison experiments between the proposed fusion frameworks and two fusion strategies at the feature layer [13], [22].
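The robustness experiment above perturbs each modality with white noise of mean 20 and variance 10. A minimal sketch of such a perturbation (the function name, the fixed seed, and clipping to the 8-bit range are our assumptions):

```python
import numpy as np

def add_white_noise(image, mean=20.0, variance=10.0, seed=0):
    # Perturb an 8-bit biometric image with Gaussian white noise of the
    # given mean and variance, then clip back to the valid [0, 255] range.
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=mean, scale=np.sqrt(variance), size=image.shape)
    return np.clip(image.astype(np.float64) + noise, 0.0, 255.0)
```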
In addition, three feature extraction methods, Oricode [37], MRRID [40] and GOM [41], are employed on the finger trimodal images, and the ELM algorithm with the same learning parameters is applied for recognition after feature fusion. Table 3 presents the simulation results, where we can see that the proposed fusion frameworks for finger trimodal biometric features are more accurate and efficient than the other fusion methods. In contrast, the performance of the graph coding fusion framework is slightly better than that of the graph serial fusion framework. For the recognition time, the graph coding fusion framework obtains the best recognition efficiency. In addition, we present the ROC curves in Fig. 12, which proves the superiority of the proposed fusion frameworks from another point of view. In summary, the proposed graph fusion strategies enable more effective and stable fusion and recognition for the finger trimodal biometric images with relatively less recognition time.

TABLE 2. Comparison results for the unimodal and trimodal biometrics.

TABLE 3. Comparison results for different fusion frameworks.

FIGURE 12. Comparison results of the ROC for different fusion methods.

V. CONCLUSION
This paper focuses on finger biometric multimodal fusion recognition. The finger contains three biometric modalities, fingerprint, finger-vein and finger-knuckle-print, which all have certain discriminative properties and are capable of identification. In this paper, a feature extraction model based on graph structure representation is established, which solves the problem that the finger trimodal feature spaces do not match. Two fusion frameworks for finger multimodal features are provided. The serial fusion stitches the trimodal graph features together, while the coding fusion integrates the trimodal graph features into one comprehensive feature based
on competitive coding. Finger trimodal fusion recognition can improve recognition accuracy and robustness. In the experiments, a homemade finger trimodal dataset has been applied to demonstrate the better performance of the proposed finger trimodal fusion recognition frameworks.

Future Work: Extracting graph features from the finger trimodal images solves the problem of scale inconsistency, which is the key to effective and stable fusion recognition for finger biometrics. This attempt is not only effective for finger multimodal fusion, but also applicable to other biometric modalities. The related future work is as follows:
(1) Finger biometric image enhancement is the premise of extracting graph features. Inappropriate filter parameters or a filter structure without generalization can cause damage to the finger biometric images, and texture breaks are very common. Thus, in future work, we would establish an image inpainting model in order to improve graph feature accuracy.
(2) Recently, Graph Convolutional Neural Networks (GCNNs) have shown satisfactory results in processing graph-structured data, which motivates us to employ GCNNs for recognition based on the graph features of finger trimodal images. In addition, establishing a deep fusion model based on GCNNs for the finger trimodalities is also future work.

APPENDIX
ROI LOCALIZATION
In the appendix, we present the ROI localization method for the finger trimodal images. FP image collection is relatively mature and the collection area is highly targeted, so one can directly use the acquired image as the ROI. For FV and FKP images, we design corresponding ROI localization strategies. Fig. 13 presents the main results of ROI localization for FV and FKP images: (a) presents the FV images, (b) shows the outlines of the FV pattern, and (c) shows the FKP images.
The joint cavity of the finger is filled with biological tissue effusion, where the light transmittance is higher than that through the surrounding tissue. We exploit this difference in light transmission to conduct ROI localization of the FV image. Because the finger blocks the light-transmitting plate, it is easy to localize the left and right outlines of the finger. After determining the outlines of the FV image, we rotate the image to keep the correct direction; additionally, the ROI can be acquired from the outlines of the image. The FKP images have no obvious characteristic marks, so it is difficult to locate the ROI in the vertical direction. According to the biological structure of the finger, the ROI of the FKP is located on the back side of the proximal joint. During the ROI localization of the FV, we have obtained the position of the proximal joint region. Through spatial mapping, the position of the proximal joint can be mapped from the FV imaging space to the knuckle imaging space.

FIGURE 13. ROI localization for the FV and FKP images.

REFERENCES
[1] A. K. Jain, A. Ross, and S. Prabhakar, ‘‘An introduction to biometric recognition,’’ IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 1–29, 2004.
[2] Q. Zhao, L. Zhang, D. Zhang, N. Luo, and J. Bao, ‘‘Adaptive pore model for fingerprint pore extraction,’’ in Proc. 19th Int. Conf. Pattern Recognit., Dec. 2008, pp. 1–4.
[3] J. Yang and Y. Shi, ‘‘Finger-vein ROI localization and vein ridge enhancement,’’ Pattern Recognit. Lett., vol. 33, no. 12, pp. 1569–1579, 2012.
[4] Z. Ma, A. E. Teschendorff, A. Leijon, Y. Qiao, H. Zhang, and J. Guo, ‘‘Variational Bayesian matrix factorization for bounded support data,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 4, pp. 876–889, Apr. 2015.
[5] N. K. Ratha, K. Karu, S. Chen, and A. K. Jain, ‘‘A real-time matching system for large fingerprint databases,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 8, pp. 799–813, Aug. 1996.
[6] L.
Coetzee and E. C. Botha, ‘‘Fingerprint recognition in low quality images,’’ Pattern Recognit., vol. 26, no. 10, pp. 1441–1460, 1993.
[7] F. Zeng, S. Hu, and K. Xiao, ‘‘Research on partial fingerprint recognition algorithm based on deep learning,’’ Neural Comput. Appl., 2018. doi: 10.1007/s00521-018-3609-8.
[8] T. Ojala, M. Pietikäinen, and T. Mäenpää, ‘‘Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 971–987, Jul. 2002.
[9] M. Faundez-Zanuy, ‘‘Data fusion in biometrics,’’ IEEE Aerosp. Electron. Syst. Mag., vol. 20, no. 1, pp. 34–38, Jan. 2005.
[10] J. Han, D. Zhang, G. Cheng, L. Guo, and J. Ren, ‘‘Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning,’’ IEEE Trans. Geosci. Remote Sens., vol. 53, no. 6, pp. 3325–3337, Aug. 2015.
[11] M. Nageshkumar, P. Mahesh, and M. N. S. Swamy, ‘‘An efficient secure multimodal biometric fusion using palmprint and face image,’’ Int. J. Comput. Sci. Issues, vol. 2, no. 1, pp. 49–53, 2009.
[12] S. Khellat-Kihela, R. Abrishambaf, J. L. Monteiro, and M. Benyettou, ‘‘Multimodal fusion of the finger vein, fingerprint and the finger-knuckle-print using Kernel Fisher analysis,’’ Appl. Soft Comput., vol. 42, pp. 439–447, May 2016.
[13] J. Yang, Z. Zhong, G. Jia, and Y. Li, ‘‘Spatial circular granulation method based on multimodal finger feature,’’ J. Elect. Comput. Eng., vol. 2016, Mar. 2016, Art. no. 7913170.
[14] N. Alajlan, M. S. Islam, and N. Ammour, ‘‘Fusion of fingerprint and heartbeat biometrics using fuzzy adaptive genetic algorithm,’’ in Proc. World Congr. Internet Secur. (WorldCIS), Dec. 2013, pp. 76–81.
[15] Z. Ma, Y. Lai, W. B. Kleijn, Y.-Z. Song, L. Wang, and J. Guo, ‘‘Variational Bayesian learning for Dirichlet process mixture of inverted Dirichlet distributions in non-Gaussian image feature modeling,’’ IEEE Trans. Neural Netw. Learn.
Syst., vol. 30, no. 2, pp. 449–463, Feb. 2019.
[16] D. Zhang, J. Han, C. Li, J. Wang, and X. Li, ‘‘Detection of co-salient objects by looking deep and wide,’’ Int. J. Comput. Vis., vol. 120, no. 2, pp. 215–232, 2016.
[17] D. Zhang, D. Meng, and J. Han, ‘‘Co-saliency detection via a self-paced multiple-instance learning framework,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 5, pp. 865–878, May 2017.
[18] Z. Ma, H. Yu, W. Chen, and J. Guo, ‘‘Short utterance based speech language identification in intelligent vehicles with time-scale modifications and deep bottleneck features,’’ IEEE Trans. Veh. Technol., vol. 68, no. 1, pp. 121–128, Jan. 2019.
[19] L. Hong and A. Jain, ‘‘Integrating faces and fingerprints for personal identification,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 12, pp. 1295–1307, Dec. 1998.
[20] L. N. Evangelin and A. L. Fred, ‘‘Feature level fusion approach for personal authentication in multimodal biometrics,’’ in Proc. 3rd Int. Conf. Technol. Eng. Manage., Mar. 2017, pp. 148–151.
[21] W. Yang, X. Huang, F. Zhou, and Q. Liao, ‘‘Comparative competitive coding for personal identification by using finger vein and finger dorsal texture fusion,’’ Inf. Sci., vol. 268, pp. 20–32, Jun. 2014.
[22] J. Peng, Y. Li, R. Li, G. Jia, and J. Yang, ‘‘Multimodal finger feature fusion and recognition based on Delaunay triangular granulation,’’ in Pattern Recognition (Communications in Computer and Information Science), vol. 484. Berlin, Germany: Springer, 2014.
[23] T. Sim, S. Zhang, R. Janakiraman, and S. Kumar, ‘‘Continuous verification using multimodal biometrics,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 687–700, Apr. 2007.
[24] R. Diestel, Graph Theory. Berlin, Germany: Springer-Verlag, 2016.
[25] G. Cheng, P. Zhou, and J. Han, ‘‘Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images,’’ IEEE Trans. Geosci. Remote Sens., vol. 54, no. 12, pp. 7405–7415, Dec. 2016.
[26] P. Foggia, G. Percannella, and M. Vento, ‘‘Graph matching and learning in pattern recognition in the last 10 years,’’ Int. J.
Pattern Recognit. Artif. Intell., vol. 28, no. 1, 2014, Art. no. 1450001.
[27] B. Luo, R. C. Wilson, and E. R. Hancock, ‘‘Spectral embedding of graphs,’’ Pattern Recognit., vol. 36, no. 10, pp. 2213–2230, 2003.
[28] E. E. A. Abusham and H. K. Bashir, ‘‘Face recognition using local graph structure (LGS),’’ Lect. Notes Comput. Sci., vol. 6762, no. 1, pp. 169–175, 2011.
[29] S. Dong et al., ‘‘Finger vein recognition based on multi-orientation weighted symmetric local graph structure,’’ KSII Trans. Internet Inf. Syst., vol. 9, no. 10, pp. 4126–4142, 2015.
[30] G. Cheung, E. Magli, Y. Tanaka, and M. K. Ng, ‘‘Graph spectral image processing,’’ Proc. IEEE, vol. 106, no. 5, pp. 907–930, May 2018.
[31] J. Yang, J. Wei, and Y. Shi, ‘‘Accurate ROI localization and hierarchical hyper-sphere model for finger-vein recognition,’’ Neurocomputing, vol. 328, pp. 171–181, Feb. 2019.
[32] Z. Ye, J. Yang, and J. H. Palancar, ‘‘Weighted graph based description for finger-vein recognition,’’ in Proc. Chin. Conf. Biometric Recognition, 2017, pp. 370–379.
[33] J. Yang, J. Yang, and Y. Shi, ‘‘Finger-vein segmentation based on multi-channel even-symmetric Gabor filters,’’ in Proc. IEEE Int. Conf. Intell. Comput. Intell. Syst., Nov. 2009, pp. 500–503.
[34] J. Chen et al., ‘‘WLD: A robust local image descriptor,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1705–1720, Sep. 2010.
[35] M. Vlachos and E. Dermatas, ‘‘Vein segmentation in infrared images using compound enhancing and crisp clustering,’’ in Proc. 6th Int. Conf. Comput. Vis. Syst. (ICVS), Santorini, Greece: Springer-Verlag, May 2008, pp. 393–402.
[36] C. B. Yu, H. F. Qin, and Y. Z. Cui, ‘‘Finger-vein image recognition combining modified Hausdorff distance with minutiae feature matching,’’ Interdiscipl. Sci. Comput. Life Sci., vol. 1, no. 4, pp. 280–289, 2009.
[37] H. G. Zhang, S. Zhang, and Y. Yin, ‘‘Online sequential ELM algorithm with forgetting factor for real applications,’’ Neurocomputing, vol. 261, pp.
144–152, Oct. 2017.
[38] C.-W. Hsu and C.-J. Lin, ‘‘A comparison of methods for multiclass support vector machines,’’ IEEE Trans. Neural Netw., vol. 13, no. 2, pp. 415–425, Mar. 2002.
[39] A. A. Mohammed and V. Umaashankar, ‘‘Effectiveness of hierarchical softmax in large scale classification tasks,’’ in Proc. Int. Conf. Adv. Comput., Commun. Inform. (ICACCI), Bangalore, India, Sep. 2018, pp. 1090–1094.
[40] B. Fan, F. Wu, and Z. Hu, ‘‘Rotationally invariant descriptors using intensity order pooling,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 10, pp. 2031–2045, Oct. 2012.
[41] Z. Chai, Z. Sun, H. Méndez-Vázquez, R. He, and T. Tan, ‘‘Gabor ordinal measures for face recognition,’’ IEEE Trans. Inf. Forensics Security, vol. 9, no. 1, pp. 14–26, Jan. 2014.
[42] Z. Ma, J.-H. Xue, A. Leijon, Z.-H. Tan, Z. Yang, and J. Guo, ‘‘Decorrelation of neutral vector variables: Theory and applications,’’ IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 1, pp. 129–143, Jan. 2018.

HAIGANG ZHANG received the B.S. degree from the University of Science and Technology Liaoning, in 2012, and the Ph.D. degree from the School of Automation and Electrical Engineering, University of Science and Technology Beijing, in 2018. He is currently with the College of Electronic Information and Automation, Civil Aviation University of China. His research interests include machine learning, image processing, and biometric recognition.

SHUYI LI received the B.S. degree in electronic information engineering from the Changchun University of Technology, in 2016. She is currently pursuing the master’s degree with the Civil Aviation University of China. Her main research interests include image processing and biometric recognition.

YIHUA SHI received the B.Sc. degree from Inner Mongolia Normal University, China, in 1993, and the M.Sc. degree from Zhengzhou University, China, in 2006. She is currently an Associate Professor with the Civil Aviation University of China.
Her major research interests include machine learning, pattern recognition, computer vision, multimedia processing, and data mining.

JINFENG YANG received the B.Sc. degree from the Zhengzhou University of Light Industry, China, in 1994, the M.Sc. degree from Zhengzhou University, China, in 2001, and the Ph.D. degree in pattern recognition and intelligent systems from the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China, in 2005. He is currently a Professor with the Civil Aviation University of China. His major research interests include machine learning, pattern recognition, computer vision, multimedia processing, and data mining. He regularly serves as a PC member and referee for international journals and top conferences, including IEEE TIP, IVC, PR, PRL, ICCV, CVPR, and ECCV. He currently serves as a syndic of the Chinese Society of Image and Graphics and the Chinese Association for Artificial Intelligence.