
Face Recognition Systems: A Survey


1. Introduction

This survey makes the following contributions:

  • We first introduce face recognition as a biometric technique.
  • We present the state of the art of existing face recognition techniques, classified into three approaches: local, holistic, and hybrid.
  • The surveyed approaches are summarized and compared under different conditions.
  • We present the most popular face databases used to test these approaches.
  • We highlight some new promising research directions.

2. Face Recognition Systems Survey

2.1. Essential Steps of Face Recognition Systems

  • Face Detection: The face recognition system begins with the localization of the human faces in a particular image. The purpose of this step is to determine whether the input image contains human faces or not. Variations in illumination and facial expression can prevent proper face detection. To facilitate the design of the subsequent recognition stages and make them more robust, pre-processing steps are performed. Many techniques are used to detect and locate the human face image, for example, the Viola–Jones detector [ 24 , 25 ], the histogram of oriented gradients (HOG) [ 13 , 26 ], and principal component analysis (PCA) [ 27 , 28 ]. The face detection step can also be used for video and image classification, object detection [ 29 ], region-of-interest detection [ 30 ], and so on.
  • Feature Extraction: The main function of this step is to extract the features of the face images detected in the detection step. This step represents the face with a feature vector called a “signature”, which describes the prominent features of the face image such as the mouth, nose, and eyes, together with their geometric distribution [ 31 , 32 ]. Each face is characterized by its structure, size, and shape, which allow it to be identified. Several techniques extract the shape of the mouth, eyes, or nose to identify the face using their sizes and distances [ 3 ]. HOG [ 33 ], Eigenfaces [ 34 ], independent component analysis (ICA), linear discriminant analysis (LDA) [ 27 , 35 ], the scale-invariant feature transform (SIFT) [ 23 ], Gabor filters, local phase quantization (LPQ) [ 36 ], Haar wavelets, Fourier transforms [ 31 ], and local binary patterns (LBP) [ 3 , 10 ] are widely used to extract the face features.
  • Face Recognition: This step takes the features extracted in the feature extraction step and compares them with the known faces stored in a specific database. There are two general applications of face recognition: identification and verification. During identification, a test face is compared with a set of faces with the aim of finding the most likely match. During verification, a test face is compared with a known face in the database in order to make an acceptance or rejection decision [ 7 , 19 ]. Correlation filters (CFs) [ 18 , 37 , 38 ], convolutional neural networks (CNN) [ 39 ], and k-nearest neighbors (K-NN) [ 40 ] are known to effectively address this task.
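The three-step pipeline above can be sketched in a few lines of numpy. This is an illustrative toy only: the histogram “signature”, the gallery layout, and the distance threshold are invented here as stand-ins for the real detectors and descriptors cited above (Viola–Jones, HOG, LBP, and so on):

```python
import numpy as np

def extract_signature(face, bins=16):
    """Feature extraction: summarize a grayscale face patch as a
    normalized intensity histogram (a toy stand-in for LBP/HOG/SIFT)."""
    hist, _ = np.histogram(face, bins=bins, range=(0, 256))
    return hist / hist.sum()

def identify(probe, gallery):
    """Identification: compare the probe signature with every enrolled
    signature and return the identity of the closest match."""
    sig = extract_signature(probe)
    dists = {name: np.linalg.norm(sig - extract_signature(img))
             for name, img in gallery.items()}
    return min(dists, key=dists.get)

def verify(probe, claimed_face, threshold=0.2):
    """Verification: accept or reject a claimed identity by thresholding
    the distance between the two signatures."""
    d = np.linalg.norm(extract_signature(probe) - extract_signature(claimed_face))
    return bool(d <= threshold)
```

Identification returns the closest gallery identity, while verification thresholds the distance for an accept/reject decision, mirroring the two applications described above.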

2.2. Classification of Face Recognition Systems

3. Local Approaches

3.1. Local Appearance-Based Techniques

  • Local binary pattern (LBP) and its variants: LBP is a general texture technique used to extract features from any object [ 16 ]. It has been widely used in many applications such as face recognition [ 3 ], facial expression recognition, texture segmentation, and texture classification. The LBP technique first divides the facial image into spatial arrays. Next, within each array square, a 3 × 3 pixel neighborhood ( p 1 , …, p 8 around a center p 0 ) is mapped across the square. The pixels of this neighborhood are thresholded with the value of the center pixel p 0 (i.e., the intensity value of the center pixel i( p 0 ) is used as the reference for thresholding) to produce a binary code. If a neighbor pixel’s value is lower than the center pixel value, it is given a zero; otherwise, it is given a one. The binary code contains information about the local texture. Finally, for each array square, a histogram of these codes is built, and the histograms are concatenated to form the feature vector. For a 3 × 3 neighborhood, the LBP code is defined as
LBP = ∑_{p=1}^{8} 2^{p−1} s( i_p − i_0 ),  with s(x) = 1 if x ≥ 0 and s(x) = 0 if x < 0, (1)
where i_0 and i_p are the intensity values of the center pixel and the neighborhood pixels, respectively. Figure 3 illustrates the procedure of the LBP technique. Khoi et al. [ 20 ] propose a fast face recognition system based on LBP, the pyramid of local binary patterns (PLBP), and the rotation-invariant local binary pattern (RI-LBP). Xi et al. [ 15 ] introduced a new unsupervised deep-learning-based technique, called the local binary pattern network (LBPNet), to extract hierarchical representations of data. The LBPNet maintains the same topology as a convolutional neural network (CNN). Experimental results obtained on public benchmarks (i.e., LFW and FERET) have shown that LBPNet is comparable to other unsupervised techniques. Laure et al. [ 40 ] implemented a method that helps to solve face recognition issues under large variations of parameters such as expression, illumination, and pose. This method is based on two techniques: LBP and K-NN. Owing to its invariance to the rotation of the target image, LBP has become one of the important techniques used for face recognition. Bonnen et al. [ 42 ] proposed a variant of the LBP technique named the “multiscale local binary pattern (MLBP)” for feature extraction. Another LBP extension is the local ternary pattern (LTP) technique [ 43 ], which is less sensitive to noise than the original LBP technique. This technique uses a three-valued code for the differences between the neighboring pixels and the central pixel. Hussain et al. [ 36 ] developed a local quantized pattern (LQP) technique for face representation. LQP is a generalization of local pattern features and is intrinsically robust to illumination conditions. The LQP features use a disk layout to sample pixels from the local neighborhood and obtain a pair of binary codes using ternary split coding. These codes are quantized, each one using a separately learned codebook.
  • Histogram of oriented gradients (HOG) [ 44 ]: HOG is one of the best descriptors for shape and edge description. The HOG technique can describe the face shape using the distribution of edge directions or light-intensity gradients. The technique divides the whole face image into cells (small regions or areas), generates a histogram of pixel edge directions or gradient directions for each cell, and finally combines the histograms of all the cells to extract the features of the face image. The feature vector computation by the HOG descriptor proceeds as follows [ 10 , 13 , 26 , 45 ]: firstly, divide the local image into regions called cells, and then calculate the amplitude of the first-order gradients of each cell in both the horizontal and vertical directions. The most common method is to apply a 1D mask, [−1 0 1]:
G_x(x, y) = I(x + 1, y) − I(x − 1, y), (2)
G_y(x, y) = I(x, y + 1) − I(x, y − 1), (3)
where I(x, y) is the pixel value at the point (x, y), and G_x(x, y) and G_y(x, y) denote the horizontal and vertical gradient amplitudes, respectively. The magnitude of the gradient and the orientation of each pixel (x, y) are computed as follows:
G(x, y) = √( G_x(x, y)² + G_y(x, y)² ), (4)
θ(x, y) = tan⁻¹( G_y(x, y) / G_x(x, y) ). (5)
The gradient magnitude and orientation of each pixel in a cell are voted into nine bins with tri-linear interpolation, and finally the histograms of all the cells are combined to extract the features of the face image. Karaaba et al. [ 44 ] proposed a combination of different histograms of oriented gradients to build a robust face recognition system; this technique is named “multi-HOG”. The authors create a vector of distances between the target and the reference face images for identification. Arigbabu et al. [ 46 ] proposed a novel face recognition system based on the Laplacian filter and the pyramid histogram of gradients (PHOG) descriptor. In addition, to investigate the face recognition problem, a support vector machine (SVM) with different kernel functions is used.
  • Correlation filters: Face recognition systems based on correlation filters (CFs) have given good results in terms of robustness, location accuracy, efficiency, and discrimination. In the field of facial recognition, correlation techniques have attracted great interest since the first use of an optical correlator [ 47 ]. These techniques provide the following advantages: high discrimination ability, good noise robustness, shift-invariance, and inherent parallelism. On the basis of these advantages, many optoelectronic hybrid correlation-filter solutions have been introduced, such as the joint transform correlator (JTC) [ 48 ] and the VanderLugt correlator (VLC) [ 47 ]. The purpose of these techniques is to calculate the degree of similarity between target and reference images; the decision is taken by detecting a correlation peak. Both techniques (VLC and JTC) are based on the “4f” optical configuration [ 37 ], created by two convergent lenses ( Figure 4 ). The face image F is processed by the fast Fourier transform (FFT) through the first lens in the Fourier plane S_F. In this Fourier plane, a specific filter P is applied (for example, the phase-only filter (POF) [ 2 ]) using optoelectronic interfaces. Finally, to obtain the filtered face image F′ (the correlation plane), the inverse FFT (IFFT) is performed by the second lens in the output plane. The VLC technique, for example, is implemented by two cascaded Fourier transform structures realized by two lenses [ 4 ], as presented in Figure 5 . It proceeds as follows: firstly, a 2D-FFT is applied to the target image to obtain a target spectrum S. Then, the target spectrum is multiplied by the filter obtained from the 2D-FFT of a reference image, and this product is placed in the Fourier plane. Finally, an inverse FFT of this product yields the correlation result, recorded in the correlation plane. The correlation result, described by the peak intensity, is used to determine the degree of similarity between the target and reference images:
C = FFT⁻¹{ S ∘ POF }, (6)
where FFT⁻¹ stands for the inverse fast Fourier transform operation and ∘ denotes element-wise array multiplication. To enhance the matching process, Horner and Gianino [ 49 ] proposed the phase-only filter (POF), which produces correlation peaks with enhanced discrimination capability. The POF is an optimized filter defined as follows:
H_POF(u, v) = S*(u, v) / |S(u, v)|, (7)
where S*(u, v) is the complex conjugate of the 2D-FFT of the reference image. To evaluate the decision, the peak-to-correlation energy (PCE) is defined as the energy of the correlation peak normalized by the overall energy of the correlation plane:
PCE = ∑_{i,j}^{N} E_peak(i, j) / ∑_{i,j}^{M} E_correlation-plane(i, j), (8)
where i, j are the coefficient coordinates; N and M are the sizes of the correlation peak spot and of the correlation plane, respectively; E_peak is the energy of the correlation peak; and E_correlation-plane is the overall energy of the correlation plane. Correlation techniques are widely applied in recognition and identification applications [ 4 , 37 , 50 , 51 , 52 , 53 ]. For example, in the work of [ 4 ], the authors presented the efficiency of the VLC technique based on the “4f” configuration for identification using an Nvidia GeForce 8400 GS GPU, with the POF filter used for the decision. Another important work in this area is presented by Leonard et al. [ 50 ], which showed the good performance and simplicity of correlation filters for face recognition. In addition, many specific filters such as POF, BPOF, Ad, IF, and so on are compared to select the best filter based on its sensitivity to rotation, scale, and noise. Napoléon et al. [ 3 ] introduced a novel system for identification and verification based on an optimized 3D modeling under different illumination conditions, which allows reconstructing faces in different poses. In particular, to deform the synthetic model, an active shape model for detecting a set of key points on the face is proposed ( Figure 6 ). The VanderLugt correlator is used to perform the identification, and the LBP descriptor is used to optimize the performance of the correlation technique under different illumination conditions. The experiments are performed on the Pointing Head Pose Image Database (PHPID) with head elevations ranging from −30° to +30°.
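As a concrete illustration of the LBP procedure described above (threshold each 3 × 3 neighborhood against its center, build per-cell histograms, concatenate), here is a minimal numpy sketch; the 2 × 2 cell grid is an arbitrary choice for the example:

```python
import numpy as np

def lbp_image(img):
    """Compute the 8-neighbour LBP code of every interior pixel:
    each neighbour brighter than (or equal to) the centre contributes
    one bit of an 8-bit code, as in Equation (1)."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                       # centre pixels i0
    # 8 neighbours, enumerated clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes += ((neigh - c) >= 0).astype(np.int32) << bit
    return codes

def lbp_histogram(img, grid=(2, 2)):
    """Divide the LBP code map into grid cells and concatenate the
    per-cell 256-bin histograms into one feature vector."""
    codes = lbp_image(img)
    gy, gx = grid
    h, w = codes.shape
    feats = []
    for i in range(gy):
        for j in range(gx):
            cell = codes[i * h // gy:(i + 1) * h // gy,
                         j * w // gx:(j + 1) * w // gx]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(cell.size, 1))
    return np.concatenate(feats)
```

The concatenated histogram is the face “signature” that a classifier (e.g., K-NN, as in Laure et al. [ 40 ]) would then compare across images.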

3.2. Key-Points-Based Techniques

  • Scale-invariant feature transform (SIFT) [ 56 , 57 ]: SIFT is an algorithm used to detect and describe the local features of an image. It is widely used to link two images through their local descriptors, which contain the information needed to match them. The main idea of the SIFT descriptor is to convert the image into a representation composed of points of interest, which contain the characteristic information of the face image. SIFT is invariant to scale and rotation and is fast, which is essential in real-time applications, but one of its disadvantages is the matching time of the key points. The algorithm has four steps: (1) detection of the maxima and minima in scale-space, (2) localization of characteristic points, (3) orientation assignment, and (4) computation of a descriptor for each characteristic point. A framework to detect key points based on the SIFT descriptor was proposed by Lenc et al. [ 56 ], who use the SIFT technique in combination with a Kepenekci approach for face recognition.
  • Speeded-up robust features (SURF) [ 29 , 57 ]: the SURF technique is inspired by SIFT, but uses wavelets and an approximation of the Hessian determinant to achieve better performance [ 29 ]. SURF is a detector and descriptor that claims to achieve the same, or even better, results than the SIFT descriptor in terms of repeatability, distinctiveness, and robustness. The main advantage of SURF is its execution time, which is shorter than that of the SIFT descriptor; the SIFT descriptor, in turn, is better adapted to describing faces affected by illumination conditions, scaling, translation, and rotation [ 57 ]. To detect feature points, SURF seeks the maxima of an approximation of the Hessian matrix, using integral images to dramatically reduce the processing time. Figure 7 shows an example of the SURF descriptor for face recognition using the AR face dataset [ 58 ].
  • Binary robust independent elementary features (BRIEF) [ 30 , 57 ]: BRIEF is a binary descriptor that is simple and fast to compute. This descriptor is based on the differences between the pixel intensity that are similar to the family of binary descriptors such as binary robust invariant scalable (BRISK) and fast retina keypoint (FREAK) in terms of evaluation. To reduce noise, the BRIEF descriptor smoothens the image patches. After that, the differences between the pixel intensity are used to represent the descriptor. This descriptor has achieved the best performance and accuracy in pattern recognition.
  • Fast retina keypoint (FREAK) [ 57 , 59 ]: the FREAK descriptor proposed by Alahi et al. [ 59 ] uses a circular retinal sampling grid. This descriptor uses 43 sampling patterns based on the retinal receptive fields shown in Figure 8 . To extract a binary descriptor, these 43 receptive fields are sampled with a density that decreases with the distance to the patch’s center, yielding on the order of a thousand potential pairs. Each receptive field is smoothed with a Gaussian function. Finally, the binary descriptor is formed by thresholding and considering the sign of the differences between pairs.
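A minimal numpy sketch of a BRIEF-style binary descriptor of the kind described above. The box smoothing, the uniform random pair layout, and the 128-bit length are simplifying assumptions for illustration (the original BRIEF samples pairs from a Gaussian distribution around the patch center and typically uses 256 bits):

```python
import numpy as np

def brief_descriptor(patch, n_bits=128, seed=0):
    """BRIEF-style binary descriptor: smooth the patch, then compare
    the intensities of n_bits random pixel pairs; each comparison
    yields one bit of the descriptor."""
    patch = np.asarray(patch, dtype=np.float64)
    # crude 3x3 box smoothing to reduce sensitivity to pixel noise
    k = np.ones((3, 3)) / 9.0
    h, w = patch.shape
    sm = patch.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sm[y, x] = (patch[y - 1:y + 2, x - 1:x + 2] * k).sum()
    rng = np.random.default_rng(seed)   # fixed seed -> reproducible pair layout
    ys = rng.integers(0, h, size=(n_bits, 2))
    xs = rng.integers(0, w, size=(n_bits, 2))
    return (sm[ys[:, 0], xs[:, 0]] < sm[ys[:, 1], xs[:, 1]]).astype(np.uint8)

def hamming(d1, d2):
    """Binary descriptors are matched by Hamming distance
    (the number of differing bits)."""
    return int(np.sum(d1 != d2))
```

Two descriptors are compared by Hamming distance, the same evaluation used by the BRISK and FREAK family mentioned above.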

3.3. Summary of Local Approaches

4. Holistic Approach

4.1. Linear Techniques

  • Eigenfaces [ 34 ] and principal component analysis (PCA) [ 27 , 62 ]: Eigenfaces is one of the most popular holistic methods used to extract the feature points of a face image. This approach is based on the principal component analysis (PCA) technique: the principal components created by PCA are used as Eigenfaces, or face templates. The PCA technique transforms a number of possibly correlated variables into a small number of uncorrelated variables called “principal components”. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables) needed to describe the data economically. Figure 9 shows how a face can be represented by a small number of features. PCA calculates the Eigenvectors of the covariance matrix and projects the original data onto a lower-dimensional feature space defined by the Eigenvectors with large Eigenvalues. PCA has been used in face representation and recognition, where the calculated Eigenvectors are referred to as Eigenfaces (as shown in Figure 10 ). An image may also be considered as a vector of dimension M × N, so that a typical image of size 4 × 4 becomes a vector of dimension 16. Let the training set of images be { X_1, X_2, X_3, …, X_N }. The average face of the set is defined by
X̄ = (1/N) ∑_{i=1}^{N} X_i. (9)
The estimated covariance matrix, which represents the scatter of all feature vectors around the average vector, is defined by
Q = (1/N) ∑_{i=1}^{N} ( X_i − X̄ )( X_i − X̄ )ᵀ. (10)
The Eigenvectors and corresponding Eigenvalues are computed using
Q V = λ V,  ( V ∈ Rⁿ, V ≠ 0 ), (11)
where V is an Eigenvector of the matrix Q associated with the Eigenvalue λ. All the training images are then projected onto the corresponding Eigen-subspace:
y_i = Vᵀ ( X_i − X̄ ),  ( i = 1, 2, 3, …, N ), (12)
where the y_i are the projections of the X_i and are called the principal components, also known as Eigenfaces. The face images are represented as linear combinations of these “principal component” vectors. To extract facial features, PCA and LDA can be used as two different feature extraction algorithms; wavelet fusion and neural networks are then applied to classify the facial features, with the ORL database used for evaluation. Figure 10 shows the first five Eigenfaces constructed from the ORL database [ 63 ].
  • Fisherfaces and linear discriminant analysis (LDA) [ 64 , 65 ]: The Fisherface method is based on the same principle as the Eigenfaces method. Its objective is to reduce the high-dimensional image space using the linear discriminant analysis (LDA) technique instead of the PCA technique. The LDA technique is commonly used for dimensionality reduction and face recognition [ 66 ]. PCA is an unsupervised technique, while LDA is a supervised learning technique that uses the class information of the data. For all samples of all classes, the between-class scatter matrix S_B and the within-class scatter matrix S_W are defined as follows:
S_B = ∑_{i=1}^{c} M_i ( μ_i − μ )( μ_i − μ )ᵀ, (13)
S_W = ∑_{i=1}^{c} ∑_{x_k ∈ X_i} ( x_k − μ_i )( x_k − μ_i )ᵀ, (14)
where μ is the overall mean vector of all samples, μ_i is the mean vector of the samples belonging to class i, X_i represents the set of samples belonging to class i with x_k being the k-th image of that class, c is the number of distinct classes, and M_i is the number of training samples in class i. S_B describes the scatter of the class means around the overall mean, and S_W describes the scatter of the samples around the mean of their own class. The goal is to maximize the ratio det|S_B| / det|S_W|, in other words, minimizing S_W while maximizing S_B. Figure 11 shows the first five Eigenfaces and Fisherfaces obtained from the ORL database [ 63 ].
  • Independent component analysis (ICA) [ 35 ]: The ICA technique is used to compute the basis vectors of a given space. The goal of this technique is to perform a linear transformation that minimizes the statistical dependence between the basis vectors, which allows the analysis of independent components; unlike in PCA, these vectors are not required to be orthogonal to each other. Because ICA describes the images with statistically independent variables, it can capture information coming from different, uncorrelated sources, which makes greater efficiency possible.
  • Improvements of the PCA, LDA, and ICA techniques: To improve the linear subspace techniques, many extensions have been developed. Cui et al. [ 67 ] proposed a new spatial face region descriptor (SFRD) method to extract the face region and to deal with noise variation. This method proceeds as follows: divide each face image into many spatial regions, and extract token-frequency (TF) features from each region by sum-pooling the reconstruction coefficients over the patches within that region. Finally, extract the SFRD for face images by applying a variant of the PCA technique called “whitened principal component analysis (WPCA)” to reduce the feature dimension and remove the noise in the leading eigenvectors. In addition, the authors in [ 68 ] proposed a variant of LDA called probabilistic linear discriminant analysis (PLDA), which seeks the directions in space with maximum discriminability and is hence most suitable for both face recognition and frontal face recognition under varying pose.
  • Gabor filters: Gabor filters are spatial sinusoids localized by a Gaussian window, which extract features from images at selected frequencies, orientations, and scales. To enhance face recognition performance in unconstrained environments, the Gabor filters are transformed according to the shape and pose to extract the feature vectors of the face image, combined with PCA, in the work of [ 69 ]. The PCA is applied to the Gabor features to remove redundancies and to obtain the best description of the face images. Finally, the cosine metric is used to evaluate the similarity.
  • Frequency domain analysis [ 70 , 71 ]: Finally, the analysis techniques in the frequency domain offer a representation of the human face as a function of low-frequency components that present high energy. The discrete Fourier transform (DFT), discrete cosine transform (DCT), or discrete wavelet transform (DWT) techniques are independent of the data, and thus do not require training.
  • Discrete wavelet transform (DWT): The DWT is another linear technique used for face recognition. In the work of [ 70 ], the authors used a two-dimensional discrete wavelet transform (2D-DWT) method for face recognition with a new patch strategy. A non-uniform patch strategy for the top level’s low-frequency sub-band is proposed by using an integral projection technique on the two top-level high-frequency sub-bands of the 2D-DWT, based on the average image of all training samples. This patch strategy better retains the integrity of local information and is more suitable for reflecting the structural features of the face image. Once the patching strategy is constructed from the testing and training samples, the decision is performed using a nearest-neighbor classifier. Many databases are used to evaluate this method, including Labeled Faces in the Wild (LFW), Extended Yale B, Face Recognition Technology (FERET), and AR.
  • Discrete cosine transform (DCT) [ 71 ]: the DCT can be used for both global and local face recognition systems. The DCT is a transformation that represents a finite sequence of data as a sum of cosine functions oscillating at different frequencies. Besides face recognition systems [ 71 ], this technique is widely used, from audio and image compression to spectral methods for the numerical solution of differential equations. The required steps to implement the DCT technique are presented as follows.
DCT Algorithm
For an M × N image f(x, y), compute the DCT coefficients
F(u, v) = α(u) α(v) ∑_{x=0}^{M−1} ∑_{y=0}^{N−1} f(x, y) cos[ (2x + 1)uπ / 2M ] cos[ (2y + 1)vπ / 2N ],
where α(u) = √(1/M) for u = 0 and α(u) = √(2/M) for u = 1, …, M − 1, and α(v) is defined analogously with N. The low-frequency coefficients, which concentrate most of the energy, are kept as the feature vector.
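The Eigenface computation of Equations (9)–(12) can be sketched directly in numpy. This illustrative version diagonalizes the full d × d covariance matrix; practical implementations instead diagonalize the smaller N × N matrix A Aᵀ when the image dimension d is much larger than the number of training images N:

```python
import numpy as np

def eigenfaces(X, k):
    """Eigenface construction following Equations (9)-(12).
    X is an (N, d) matrix holding N vectorized face images."""
    mean = X.mean(axis=0)                 # average face, Eq. (9)
    A = X - mean                          # centred data
    Q = (A.T @ A) / X.shape[0]            # covariance matrix, Eq. (10)
    vals, vecs = np.linalg.eigh(Q)        # eigen-decomposition, Eq. (11)
    order = np.argsort(vals)[::-1][:k]    # keep the k largest eigenvalues
    V = vecs[:, order]                    # eigenfaces, shape (d, k)
    Y = A @ V                             # training projections, Eq. (12)
    return mean, V, Y

def project(face, mean, V):
    """Project a new vectorized face onto the eigenface subspace."""
    return (face - mean) @ V
```

Recognition then reduces to comparing the projection of a probe face with the stored projections Y, e.g., by nearest neighbor.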

4.2. Nonlinear Techniques

  • Kernel principal component analysis (KPCA): the KPCA technique is a nonlinear extension of PCA that implicitly maps the data into a high-dimensional feature space through a kernel function and performs PCA there.
Kernel PCA Algorithm
Compute the kernel matrix K, with K_ij = k(x_i, x_j), using a kernel function k; center (normalize) K with the function K̃ = K − 1_N K − K 1_N + 1_N K 1_N, where 1_N is the N × N matrix whose entries are all 1/N; solve the eigenproblem K̃ α = N λ α; finally, project a sample x onto the l-th component as y_l = ∑_i α_i^l k(x_i, x), again using the kernel function.
  • Kernel linear discriminant analysis (KLDA) [ 73 ]: the KLDA technique is a kernel extension of the linear LDA technique, in the same way that KPCA is a kernel extension of PCA. Arashloo et al. [ 73 ] proposed a nonlinear binary class-specific kernel discriminant analysis classifier (CS-KDA) based on spectral regression kernel discriminant analysis. Other nonlinear techniques have also been used in the context of facial recognition:
  • Gabor-KLDA [ 74 ].
  • Evolutionary weighted principal component analysis (EWPCA) [ 75 ].
  • Kernelized maximum average margin criterion (KMAMC), SVM, and kernel Fisher discriminant analysis (KFD) [ 76 ].
  • Wavelet transform (WT), radon transform (RT), and cellular neural networks (CNN) [ 77 ].
  • Joint transform correlator-based two-layer neural network [ 78 ].
  • Kernel Fisher discriminant analysis (KFD) and KPCA [ 79 ].
  • Locally linear embedding (LLE) and LDA [ 80 ].
  • Nonlinear locality preserving with deep networks [ 81 ].
  • Nonlinear DCT and kernel discriminative common vector (KDCV) [ 82 ].
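As a sketch of one nonlinear technique from the list above, here is a minimal kernel PCA with a Gaussian (RBF) kernel in numpy; the bandwidth gamma and the component count are illustrative choices, not values from any cited work:

```python
import numpy as np

def kernel_pca(X, k, gamma=0.1):
    """Kernel PCA sketch with a Gaussian (RBF) kernel.
    Returns the k-dimensional nonlinear projections of the rows of X."""
    # kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # centre the kernel matrix in feature space
    N = K.shape[0]
    one = np.ones((N, N)) / N
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigen-decompose and keep the k leading components
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:k]
    vals, vecs = vals[order], vecs[:, order]
    # scale eigenvectors by 1/sqrt(eigenvalue) so that the projection
    # axes have unit norm in feature space, then project the data
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas          # (N, k) projected coordinates
```

Unlike linear PCA, the projection depends on the data only through the kernel matrix, which is what allows nonlinear structure to be captured.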

4.3. Summary of Holistic Approaches

5. Hybrid Approach

5.1. Technique Presentation

  • Gabor wavelet and linear discriminant analysis (GW-LDA) [ 91 ]: Fathima et al. [ 91 ] proposed a hybrid approach combining Gabor wavelet and linear discriminant analysis (HGWLDA) for face recognition. The grayscale face image is approximated and reduced in dimension. The authors have convolved the grayscale face image with a bank of Gabor filters with varying orientations and scales. After that, a subspace technique 2D-LDA is used to maximize the inter-class space and reduce the intra-class space. To classify and recognize the test face image, the k-nearest neighbour (k-NN) classifier is used. The recognition task is done by comparing the test face image feature with each of the training set features. The experimental results show the robustness of this approach in different lighting conditions.
  • Over-complete LBP (OCLBP), LDA, and within-class covariance normalization (WCCN): Barkan et al. [ 92 ] proposed a new representation of the face image based on over-complete LBP (OCLBP), a multi-scale modified version of the LBP technique. The LDA technique is performed to reduce the high-dimensional representations. Finally, within-class covariance normalization (WCCN) is the metric learning technique used for face recognition.
  • Advanced correlation filters and Walsh LBP (WLBP): Juefei et al. [ 93 ] implemented a single-sample periocular-based alignment-robust face recognition technique based on high-dimensional Walsh LBP (WLBP). This technique utilizes only one sample per subject class and generates new face images under a wide range of 3D rotations using the 3D generic elastic model, which is both accurate and computationally inexpensive. The LFW database is used for evaluation, and the proposed method outperformed the state-of-the-art algorithms under four evaluation protocols with a high accuracy of 89.69%.
  • Multi-sub-region-based correlation filter bank (MS-CFB): Yan et al. [ 94 ] propose an effective feature extraction technique for robust face recognition, named multi-sub-region-based correlation filter bank (MS-CFB). MS-CFB extracts the local features independently for each face sub-region. After that, the different face sub-regions are concatenated to give optimal overall correlation outputs. This technique reduces the complexity, achieves higher recognition rates, and provides a better feature representation for recognition compared with several state-of-the-art techniques on various public face databases.
  • SIFT features, Fisher vectors, and PCA: Simonyan et al. [ 64 ] have developed a novel method for face recognition based on the SIFT descriptor and Fisher vectors. The authors propose a discriminative dimensionality reduction owing to the high dimensionality of the Fisher vectors. After that, these vectors are projected into a low dimensional subspace with a linear projection. The objective of this methodology is to describe the image based on dense SIFT features and Fisher vectors encoding to achieve high performance on the challenging LFW dataset in both restricted and unrestricted settings.
  • CNNs and stacked auto-encoder (SAE) techniques: Ding et al. [ 95 ] proposed a multimodal deep face representation (MM-DFR) framework based on convolutional neural networks (CNNs), which draws on the original holistic face image, a frontal face rendered by a 3D face model (standing for holistic and local facial features, respectively), and uniformly sampled image patches. The MM-DFR framework has two steps: CNNs are used to extract the features, and a three-layer stacked auto-encoder (SAE) compresses the high-dimensional deep features into a compact face signature. The LFW database is used to evaluate the identification performance of MM-DFR. The flowchart of the proposed MM-DFR framework is shown in Figure 12 .
  • PCA and ANFIS: Sharma et al. [ 96 ] propose an efficient pose-invariant face recognition system based on PCA technique and ANFIS classifier. The PCA technique is employed to extract the features of an image, and the ANFIS classifier is developed for identification under a variety of pose conditions. The performance of the proposed system based on PCA–ANFIS is better than ICA–ANFIS and LDA–ANFIS for the face recognition task. The ORL database is used for evaluation.
  • DCT and PCA: Ojala et al. [ 97 ] develop a fast face recognition system based on DCT and PCA techniques. A genetic algorithm (GA) is used to select facial features, removing irrelevant ones and reducing the number of features. In addition, the DCT–PCA technique is used to extract the features and reduce the dimensionality. The minimum Euclidean distance (ED) is used as the measurement for the decision. Various face databases are used to demonstrate the effectiveness of this system.
  • PCA, SIFT, and iterative closest point (ICP): Mian et al. [ 98 ] present a multimodal (2D and 3D) face recognition system based on hybrid matching to achieve efficiency and robustness to facial expressions. The Hotelling transform is performed to automatically correct the pose of a 3D face using its texture. After that, to form a rejection classifier, a novel 3D spherical face representation (SFR) is used in conjunction with the SIFT descriptor, which provides efficient recognition for large galleries by eliminating a large number of candidate faces. A modified iterative closest point (ICP) algorithm is used for the decision. The system is robust to facial expressions and achieved a 98.6% verification rate and 96.1% identification rate on the complete FRGC v2 database.
  • PCA, local Gabor binary pattern histogram sequence (LGBPHS), and Gabor wavelets: Cho et al. [ 99 ] proposed a computationally efficient hybrid face recognition system that employs both holistic and local features. The PCA technique is used to reduce the dimensionality. After that, the local Gabor binary pattern histogram sequence (LGBPHS) technique is employed in the recognition stage, reducing the complexity caused by the Gabor filters. The experimental results show a better recognition rate compared with the PCA and Gabor wavelet techniques under illumination variations. The Extended Yale Face Database B is used to demonstrate the effectiveness of this system.
  • PCA and Fisher linear discriminant (FLD) [ 100 , 101 ]: Sing et al. [ 101 ] propose a novel hybrid technique for face representation and recognition that exploits both local and subspace features. To extract the local features, the whole image is divided into sub-regions, while the global features are extracted directly from the whole image. After that, PCA and Fisher linear discriminant (FLD) techniques are applied to the fused feature vector to reduce the dimensionality. The CMU-PIE, FERET, and AR face databases are used for the evaluation.
  • SPCA–KNN [ 102 ]: Kamencay et al. [ 102 ] develop a new face recognition method based on SIFT features combined with PCA and KNN techniques. The Hessian–Laplace detector along with the SPCA descriptor is used to extract the local features. SPCA is introduced to identify the human face, and the KNN classifier identifies the closest human faces from the trained features. The experiments achieved a recognition rate of 92% for the unsegmented ESSEX database and 96% for the segmented database (700 training images).
  • Convolution operations, LSTM recurrent units, and ELM classifier [ 103 ]: Sun et al. [ 103 ] propose a hybrid deep structure called CNN–LSTM–ELM in order to achieve sequential human activity recognition (HAR). Their proposed CNN–LSTM–ELM structure is evaluated using the OPPORTUNITY dataset, which contains 46,495 training samples and 9894 testing samples, and each sample is a sequence. The model training and testing runs on a GPU with 1536 cores, 1050 MHz clock speed, and 8 GB RAM. The flowchart of the proposed CNN–LSTM–ELM structure is shown in Figure 13 [ 103 ].
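Several of the hybrid pipelines above (e.g., [ 64 , 95 , 96 , 99 ]) use PCA as the dimensionality-reduction step before classification. The following minimal NumPy sketch illustrates that step in isolation; the random feature matrix merely stands in for real descriptor vectors and is not taken from any of the cited papers.

```python
import numpy as np

def pca_project(features, n_components):
    # Center the feature vectors, then project onto the top principal
    # components obtained from the SVD of the centered data matrix.
    mean = features.mean(axis=0)
    centered = features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# 50 synthetic 1024-dimensional "face feature" vectors, reduced to 20 dims
rng = np.random.default_rng(0)
features = rng.standard_normal((50, 1024))
reduced = pca_project(features, 20)
print(reduced.shape)  # (50, 20)
```

In practice, the number of retained components is usually chosen so that the projection preserves most of the variance of the training set.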

5.2. Summary of Hybrid Approaches

6. Assessment of Face Recognition Approaches

6.1. Measures of Similarity or Distances

  • Peak-to-correlation energy (PCE) or peak-to-sidelobe ratio (PSR) [ 18 ]: The PCE was introduced in (8).
  • Euclidean distance [ 54 ]: The Euclidean distance is one of the most basic measures used to compute the direct distance between two points in a plane. If we have two points P_1 and P_2 with coordinates (x_1, y_1) and (x_2, y_2), respectively, the Euclidean distance between them is d_E(P_1, P_2) = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}. (15) In general, the Euclidean distance between two points P = (p_1, p_2, \ldots, p_n) and Q = (q_1, q_2, \ldots, q_n) in n-dimensional space is defined by d_E(P, Q) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2}. (16)
  • Bhattacharyya distance [ 104 , 105 ]: The Bhattacharyya distance is a statistical measure that quantifies the similarity between two discrete or continuous probability distributions. This distance is particularly known for its low processing time and its low sensitivity to noise. For probability distributions p and q defined on the same domain X, the Bhattacharyya distance is defined as D_B(p, q) = -\ln(BC(p, q)), (17) where BC is the Bhattacharyya coefficient, defined as BC(p, q) = \sum_{x \in X} \sqrt{p(x) q(x)} (a) for discrete probability distributions and BC(p, q) = \int \sqrt{p(x) q(x)} \, dx (b) for continuous probability distributions. (18) In both cases, 0 ≤ BC ≤ 1 and 0 ≤ D_B ≤ ∞. In its simplest formulation, the Bhattacharyya distance between two classes that follow normal distributions can be calculated from the means (\mu_p, \mu_q) and variances (\sigma_p^2, \sigma_q^2): D_B(p, q) = \frac{1}{4} \ln\left(\frac{1}{4}\left(\frac{\sigma_p^2}{\sigma_q^2} + \frac{\sigma_q^2}{\sigma_p^2} + 2\right)\right) + \frac{1}{4} \frac{(\mu_p - \mu_q)^2}{\sigma_p^2 + \sigma_q^2}. (19)
  • Chi-squared distance [ 106 ]: The Chi-squared (\chi^2) distance weights each bin difference by the bin values, giving differences between bins with few occurrences the same relevance as differences between bins with many occurrences. To compare two histograms S_1 = (u_1, \ldots, u_m) and S_2 = (w_1, \ldots, w_m), the Chi-squared distance is defined as \chi^2(S_1, S_2) = \frac{1}{2} \sum_{i=1}^{m} \frac{(u_i - w_i)^2}{u_i + w_i}. (20)
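The measures above translate directly into code. The following Python sketch implements Equations (16), (17)–(18a), and (20) for discrete histograms; the histogram values are illustrative toy data, not taken from any experiment.

```python
import math

def euclidean(p, q):
    # Equation (16): straight-line distance in n-dimensional space
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def bhattacharyya(p, q):
    # Equations (17)-(18a): distance between two discrete distributions
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))  # coefficient, 0 <= BC <= 1
    return -math.log(bc)

def chi_squared(u, w):
    # Equation (20): bin-wise weighted comparison of two histograms
    return 0.5 * sum((ui - wi) ** 2 / (ui + wi)
                     for ui, wi in zip(u, w) if ui + wi > 0)

# Toy normalized histograms (illustrative values only)
h1 = [0.1, 0.4, 0.3, 0.2]
h2 = [0.2, 0.3, 0.3, 0.2]
print(euclidean(h1, h2))
print(bhattacharyya(h1, h2))
print(chi_squared(h1, h2))
```

All three return 0 for identical inputs and grow as the histograms diverge, which is what makes them usable as matching scores between feature histograms.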

6.2. Classifiers

  • Support vector machines (SVMs) [ 13 , 26 ]: The feature vectors extracted by any descriptor can be classified by a linear or nonlinear SVM. The SVM classifier separates the classes with an optimal hyperplane. To determine this hyperplane, only the closest points of the training set are needed; these points are called support vectors ( Figure 14 ). An infinite number of hyperplanes can perfectly separate two classes, so the SVM selects the hyperplane that maximizes the minimal distance between the training examples and the hyperplane (i.e., the distance between the support vectors and the hyperplane). This distance is called the “margin”. Given a labeled training set D = \{(x_i, y_i) \mid x_i \in \mathbb{R}^n, y_i \in \{-1, 1\}, i = 1, \ldots, l\}, (21) where the x_i are the training feature vectors and the y_i are the corresponding labels (1 or −1), the SVM finds the hyperplane that separates the samples with the smallest error. The classification function is obtained by calculating the signed distance between the input vector and the hyperplane: w \cdot x_i - b = C_f, (22) where w and b are the parameters of the model. Shen et al. [ 108 ] used Gabor filters to extract the face features and applied an SVM for classification. The FaceNet method achieves record accuracies of 99.63% and 95.12% on the LFW and YouTube Faces DB datasets, respectively.
  • k-nearest neighbor (k-NN) [ 17 , 91 ]: k-NN is a “lazy” algorithm: during training it simply stores the samples and builds no explicit model, unlike, for example, decision trees. At query time, a sample is assigned the majority label among its k nearest training samples.
  • K-means [ 9 , 109 ]: It is called K-means because it represents each of the groups by the average (or weighted average) of its points, called the centroid. In the K-means algorithm, it is necessary to specify a priori the number of clusters k that one wishes to form in order to start the process.
  • Deep learning (DL): An automatic learning technique that uses neural network architectures. The term “deep” refers to the number of hidden layers in the neural network. While conventional neural networks contain only a few hidden layers, deep neural networks (DNNs) can contain many, as presented in Figure 15 .
  • Convolutional layer : sometimes called the feature extractor layer, because the features of the image are extracted within this layer. Convolution preserves the spatial relationship between pixels by learning image features over small squares of the input image. The input image is convolved with a set of learnable filters, producing a feature map (or activation map) that is fed as input to the next convolutional layer. The convolutional layer also applies a rectified linear unit (ReLU) activation to convert all negative values to zero. This makes the network computationally efficient, as few neurons are activated at a time.
  • Pooling layer: used to reduce dimensions, with the aim of reducing processing times by retaining the most important information after convolution. This layer reduces the number of parameters and the computation in the network, controlling overfitting by progressively reducing the spatial size of the representation. There are two operations in this layer: average pooling and maximum pooling: - Average-pooling takes all the elements of each sub-matrix, calculates their average, and stores the value in the output matrix. - Max-pooling takes the highest value found in each sub-matrix and saves it in the output matrix.
  • Fully-connected layer : in this layer, the neurons have a complete connection to all the activations from the previous layers. It connects neurons in one layer to neurons in another layer. It is used to classify images between different categories by training.
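To make the k-NN rule above concrete, the following pure-Python sketch classifies a query feature vector by majority vote among its k nearest training vectors under the Euclidean distance of Equation (16); the toy 2D vectors and subject labels are invented for illustration.

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    # "Lazy" learning: no model is built; all work happens at query time.
    # Rank the stored training vectors by Euclidean distance to the query...
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_x, train_y))
    # ...then take a majority vote among the k nearest labels.
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2D feature vectors for two subject classes (illustrative only)
train_x = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
train_y = ["subject_A", "subject_A", "subject_B", "subject_B"]
print(knn_predict(train_x, train_y, (0.15, 0.15)))  # "subject_A"
```

Real systems apply the same rule to descriptor vectors (LBP histograms, Gabor features, deep embeddings) rather than raw 2D points, but the voting logic is unchanged.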

6.3. Databases Used

  • LFW (Labeled Faces in the Wild) database was released in October 2007. It contains 13,233 images of 5749 subjects, 1680 of whom have at least two images, with the rest having a single image. The face images were collected from the web, pre-processed, and localized by the Viola–Jones detector at a resolution of 250 × 250 pixels. Most are in color, although there are also some in grayscale; they are provided in JPG format and organized in folders.
  • FERET (Face Recognition Technology) database was created in 15 sessions in a semi-controlled environment between August 1993 and July 1996. It contains 1564 sets of images, with a total of 14,126 images. The duplicate sets belong to subjects already present in the individual image sets and were generally captured a day apart. Some images of the same subject were taken several years apart and can be used to study facial changes that appear over time. The images are 24-bit RGB color images with a resolution of 512 × 768 pixels.
  • AR face database was created by Aleix Martínez and Robert Benavente in the computer vision center (CVC) of the Autonomous University of Barcelona in June 1998. It contains more than 4000 images of 126 subjects, including 70 men and 56 women. They were taken at the CVC under a controlled environment. The images were taken frontally, with different facial expressions, three different lighting conditions, and several accessories: scarves, glasses, or sunglasses. Two imaging sessions were performed with the same subjects, 14 days apart. The images have a resolution of 576 × 768 pixels and a depth of 24 bits, in RGB RAW format.
  • ORL Database of Faces was collected between April 1992 and April 1994 at the AT&T laboratory in Cambridge. It consists of 10 images per subject for 40 subjects, 400 images in total. For some subjects, the images were taken at different times, with varying illumination and facial expressions: eyes open/closed, smiling/not smiling, with or without glasses. The images were taken against a homogeneous black background, with the subjects upright and frontal to the camera, allowing some small rotation. They are grayscale images with a resolution of 92 × 112 pixels.
  • Extended Yale Face B database contains 16,128 grayscale images (640 × 480) of 28 individuals under 9 poses and 64 different lighting conditions. It also includes a set of images cropped to the individuals’ faces only.
  • Pointing Head Pose Image Database (PHPID) is one of the most widely used databases for face recognition under pose variation. It contains 2790 monocular face images of 15 persons, with pan and tilt angles ranging from −90° to +90°. Every person has two series of 93 different poses (93 images each). The subjects have various skin colors and were photographed with and without glasses.

6.4. Comparison between Holistic, Local, and Hybrid Techniques

7. Discussion About Future Directions and Conclusions

7.1. Discussion

  • Local approaches: use features that describe the face only partially. For example, a system could extract local features such as the eyes, mouth, and nose. The feature values are calculated from lines or points that can be represented on the face image for the recognition step.
  • Holistic approaches: use features that globally describe the complete face as a model, including the background (although it is desirable to occupy the smallest possible surface).
  • Hybrid approaches: combine local and holistic approaches.
  • Three-dimensional face recognition: In 2D image-based techniques, some information is lost owing to the 3D structure of the face. Lighting and pose variations are two major unresolved problems of 2D face recognition. Recently, 3D face recognition has been widely studied by the scientific community to overcome the unresolved problems of 2D face recognition and to achieve significantly higher accuracy by measuring the geometry of rigid features on the face. For this reason, several recent systems based on 3D data have been developed [ 3 , 93 , 95 , 128 , 129 ].
  • Multimodal facial recognition: sensors developed in recent years have a proven ability to acquire not only two-dimensional texture information but also facial shape, that is, three-dimensional information. For this reason, some recent studies have merged the two types of information (2D and 3D) to take advantage of each and obtain a hybrid system that improves recognition compared with a single modality [ 98 ].
  • Deep learning (DL): a very broad concept with no exact definition, but studies [ 14 , 110 , 111 , 112 , 113 , 121 , 130 , 131 ] agree that DL includes a set of algorithms that attempt to model high-level abstractions through multiple processing layers. This field of research began in the 1980s and is a branch of machine learning in which deep neural networks (DNNs) are trained to achieve greater accuracy than other classical techniques. Recent progress has reached a point where DL performs better than humans in some tasks, for example, recognizing objects in images.

7.2. Conclusions

Author Contributions

Conflicts of Interest

References

  • Liao, S.; Jain, A.K.; Li, S.Z. Partial face recognition: Alignment-free approach. IEEE Trans. Pattern Anal. Mach. Intell. 2012 , 35 , 1193–1205. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Jridi, M.; Napoléon, T.; Alfalou, A. One lens optical correlation: Application to face recognition. Appl. Opt. 2018 , 57 , 2087–2095. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Napoléon, T.; Alfalou, A. Pose invariant face recognition: 3D model from single photo. Opt. Lasers Eng. 2017 , 89 , 150–161. [ Google Scholar ] [ CrossRef ]
  • Ouerhani, Y.; Jridi, M.; Alfalou, A. Fast face recognition approach using a graphical processing unit “GPU”. In Proceedings of the 2010 IEEE International Conference on Imaging Systems and Techniques, Thessaloniki, Greece, 1–2 July 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 80–84. [ Google Scholar ]
  • Yang, W.; Wang, S.; Hu, J.; Zheng, G.; Valli, C. A fingerprint and finger-vein based cancelable multi-biometric system. Pattern Recognit. 2018 , 78 , 242–251. [ Google Scholar ] [ CrossRef ]
  • Patel, N.P.; Kale, A. Optimize Approach to Voice Recognition Using IoT. In Proceedings of the 2018 International Conference on Advances in Communication and Computing Technology (ICACCT), Sangamner, India, 8–9 February 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 251–256. [ Google Scholar ]
  • Wang, Q.; Alfalou, A.; Brosseau, C. New perspectives in face correlation research: A tutorial. Adv. Opt. Photonics 2017 , 9 , 1–78. [ Google Scholar ] [ CrossRef ]
  • Alfalou, A.; Brosseau, C.; Kaddah, W. Optimization of decision making for face recognition based on nonlinear correlation plane. Opt. Commun. 2015 , 343 , 22–27. [ Google Scholar ] [ CrossRef ]
  • Zhao, C.; Li, X.; Cang, Y. Bisecting k-means clustering based face recognition using block-based bag of words model. Opt. Int. J. Light Electron Opt. 2015 , 126 , 1761–1766. [ Google Scholar ] [ CrossRef ]
  • HajiRassouliha, A.; Gamage, T.P.B.; Parker, M.D.; Nash, M.P.; Taberner, A.J.; Nielsen, P.M. FPGA implementation of 2D cross-correlation for real-time 3D tracking of deformable surfaces. In Proceedings of the 2013 28th International Conference on Image and Vision Computing New Zealand (IVCNZ 2013), Wellington, New Zealand, 27–29 November 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 352–357. [ Google Scholar ]
  • Kortli, Y.; Jridi, M.; Al Falou, A.; Atri, M. A comparative study of CFs, LBP, HOG, SIFT, SURF, and BRIEF techniques for face recognition. In Pattern Recognition and Tracking XXIX ; International Society for Optics and Photonics; SPIE: Bellingham, WA, USA, 2018; Volume 10649, p. 106490M. [ Google Scholar ]
  • Dehai, Z.; Da, D.; Jin, L.; Qing, L. A pca-based face recognition method by applying fast fourier transform in pre-processing. In 3rd International Conference on Multimedia Technology (ICMT-13) ; Atlantis Press: Paris, France, 2013. [ Google Scholar ]
  • Ouerhani, Y.; Alfalou, A.; Brosseau, C. Road mark recognition using HOG-SVM and correlation. In Optics and Photonics for Information Processing XI ; International Society for Optics and Photonics; SPIE: Bellingham, WA, USA, 2017; Volume 10395, p. 103950Q. [ Google Scholar ]
  • Liu, W.; Wang, Z.; Liu, X.; Zeng, N.; Liu, Y.; Alsaadi, F.E. A survey of deep neural network architectures and their applications. Neurocomputing 2017 , 234 , 11–26. [ Google Scholar ] [ CrossRef ]
  • Xi, M.; Chen, L.; Polajnar, D.; Tong, W. Local binary pattern network: A deep learning approach for face recognition. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 3224–3228. [ Google Scholar ]
  • Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996 , 29 , 51–59. [ Google Scholar ] [ CrossRef ]
  • Gowda, H.D.S.; Kumar, G.H.; Imran, M. Multimodal Biometric Recognition System Based on Nonparametric Classifiers. Data Anal. Learn. 2018 , 43 , 269–278. [ Google Scholar ]
  • Ouerhani, Y.; Jridi, M.; Alfalou, A.; Brosseau, C. Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter. Opt. Commun. 2013 , 289 , 33–44. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Mousa Pasandi, M.E. Face, Age and Gender Recognition Using Local Descriptors. Ph.D. Thesis, Université d’Ottawa/University of Ottawa, Ottawa, ON, Canada, 2014. [ Google Scholar ]
  • Khoi, P.; Thien, L.H.; Viet, V.H. Face Retrieval Based on Local Binary Pattern and Its Variants: A Comprehensive Study. Int. J. Adv. Comput. Sci. Appl. 2016 , 7 , 249–258. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Zeppelzauer, M. Automated detection of elephants in wildlife video. EURASIP J. Image Video Process. 2013 , 46 , 2013. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Parmar, D.N.; Mehta, B.B. Face recognition methods & applications. arXiv 2014 , arXiv:1403.0485. [ Google Scholar ]
  • Vinay, A.; Hebbar, D.; Shekhar, V.S.; Murthy, K.B.; Natarajan, S. Two novel detector-descriptor based approaches for face recognition using sift and surf. Procedia Comput. Sci. 2015 , 70 , 185–197. [ Google Scholar ]
  • Yang, H.; Wang, X.A. Cascade classifier for face detection. J. Algorithms Comput. Technol. 2016 , 10 , 187–197. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001. [ Google Scholar ]
  • Rettkowski, J.; Boutros, A.; Göhringer, D. HW/SW Co-Design of the HOG algorithm on a Xilinx Zynq SoC. J. Parallel Distrib. Comput. 2017 , 109 , 50–62. [ Google Scholar ] [ CrossRef ]
  • Seo, H.J.; Milanfar, P. Face verification using the lark representation. IEEE Trans. Inf. Forensics Secur. 2011 , 6 , 1275–1286. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Shah, J.H.; Sharif, M.; Raza, M.; Azeem, A. A Survey: Linear and Nonlinear PCA Based Face Recognition Techniques. Int. Arab J. Inf. Technol. 2013 , 10 , 536–545. [ Google Scholar ]
  • Du, G.; Su, F.; Cai, A. Face recognition using SURF features. In MIPPR 2009: Pattern Recognition and Computer Vision ; International Society for Optics and Photonics; SPIE: Bellingham, WA, USA, 2009; Volume 7496, p. 749628. [ Google Scholar ]
  • Calonder, M.; Lepetit, V.; Ozuysal, M.; Trzcinski, T.; Strecha, C.; Fua, P. BRIEF: Computing a local binary descriptor very fast. IEEE Trans. Pattern Anal. Mach. Intell. 2011 , 34 , 1281–1298. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Smach, F.; Miteran, J.; Atri, M.; Dubois, J.; Abid, M.; Gauthier, J.P. An FPGA-based accelerator for Fourier Descriptors computing for color object recognition using SVM. J. Real-Time Image Process. 2007 , 2 , 249–258. [ Google Scholar ] [ CrossRef ]
  • Kortli, Y.; Jridi, M.; Al Falou, A.; Atri, M. A novel face detection approach using local binary pattern histogram and support vector machine. In Proceedings of the 2018 International Conference on Advanced Systems and Electric Technologies (IC_ASET), Hammamet, Tunisia, 22–25 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 28–33. [ Google Scholar ]
  • Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C. Optical image authentication scheme using dual polarization decoding configuration. Opt. Lasers Eng. 2019 , 112 , 151–161. [ Google Scholar ] [ CrossRef ]
  • Turk, M.; Pentland, A. Eigenfaces for recognition. J. Cogn. Neurosci. 1991 , 3 , 71–86. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Annalakshmi, M.; Roomi, S.M.M.; Naveedh, A.S. A hybrid technique for gender classification with SLBP and HOG features. Clust. Comput. 2019 , 22 , 11–20. [ Google Scholar ] [ CrossRef ]
  • Hussain, S.U.; Napoléon, T.; Jurie, F. Face Recognition Using Local Quantized Patterns ; HAL: Bengaluru, India, 2012. [ Google Scholar ]
  • Alfalou, A.; Brosseau, C. Understanding Correlation Techniques for Face Recognition: From Basics to Applications. In Face Recognition ; Oravec, M., Ed.; IntechOpen: Rijeka, Croatia, 2010. [ Google Scholar ]
  • Napoléon, T.; Alfalou, A. Local binary patterns preprocessing for face identification/verification using the VanderLugt correlator. In Optical Pattern Recognition XXV ; International Society for Optics and Photonics; SPIE: Bellingham, WA, USA, 2014; Volume 9094, p. 909408. [ Google Scholar ]
  • Schroff, F.; Kalenichenko, D.; Philbin, J. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, Boston, MA, USA, 7–12 June 2015; pp. 815–823. [ Google Scholar ]
  • Kambi Beli, I.; Guo, C. Enhancing face identification using local binary patterns and k-nearest neighbors. J. Imaging 2017 , 3 , 37. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P. Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques. Opt. Commun. 2015 , 356 , 256–268. [ Google Scholar ] [ CrossRef ]
  • Bonnen, K.; Klare, B.F.; Jain, A.K. Component-based representation in automated face recognition. IEEE Trans. Inf. Forensics Secur. 2012 , 8 , 239–253. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Ren, J.; Jiang, X.; Yuan, J. Relaxed local ternary pattern for face recognition. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 3680–3684. [ Google Scholar ]
  • Karaaba, M.; Surinta, O.; Schomaker, L.; Wiering, M.A. Robust face recognition by computing distances from multiple histograms of oriented gradients. In Proceedings of the 2015 IEEE Symposium Series on Computational Intelligence, Cape Town, South Africa, 7–10 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 203–209. [ Google Scholar ]
  • Huang, C.; Huang, J. A fast HOG descriptor using lookup table and integral image. arXiv 2017 , arXiv:1703.06256. [ Google Scholar ]
  • Arigbabu, O.A.; Ahmad, S.M.S.; Adnan, W.A.W.; Yussof, S.; Mahmood, S. Soft biometrics: Gender recognition from unconstrained face images using local feature descriptor. arXiv 2017 , arXiv:1702.02537. [ Google Scholar ]
  • Vander Lugt, A. Signal detection by complex spatial filtering. IEEE Trans. Inf. Theory 1964 , 10 , 139. [ Google Scholar ]
  • Weaver, C.S.; Goodman, J.W. A technique for optically convolving two functions. Appl. Opt. 1966 , 5 , 1248–1249. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Horner, J.L.; Gianino, P.D. Phase-only matched filtering. Appl. Opt. 1984 , 23 , 812–816. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Leonard, I.; Alfalou, A.; Brosseau, C. Face recognition based on composite correlation filters: Analysis of their performances. In Face Recognition: Methods, Applications and Technology ; Nova Science Pub Inc.: London, UK, 2012. [ Google Scholar ]
  • Katz, P.; Aron, M.; Alfalou, A. A Face-Tracking System to Detect Falls in the Elderly ; SPIE Newsroom; SPIE: Bellingham, WA, USA, 2013. [ Google Scholar ]
  • Alfalou, A.; Brosseau, C.; Katz, P.; Alam, M.S. Decision optimization for face recognition based on an alternate correlation plane quantification metric. Opt. Lett. 2012 , 37 , 1562–1564. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Elbouz, M.; Bouzidi, F.; Alfalou, A.; Brosseau, C.; Leonard, I.; Benkelfat, B.E. Adapted all-numerical correlator for face recognition applications. In Optical Pattern Recognition XXIV ; International Society for Optics and Photonics; SPIE: Bellingham, WA, USA, 2013; Volume 8748, p. 874807. [ Google Scholar ]
  • Heflin, B.; Scheirer, W.; Boult, T.E. For your eyes only. In Proceedings of the 2012 IEEE Workshop on the Applications of Computer Vision (WACV), Breckenridge, CO, USA, 9–11 January 2012; pp. 193–200. [ Google Scholar ]
  • Zhu, X.; Liao, S.; Lei, Z.; Liu, R.; Li, S.Z. Feature correlation filter for face recognition. In Advances in Biometrics, Proceedings of the International Conference on Biometrics, Seoul, Korea, 27–29 August 2007 ; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4642, pp. 77–86. [ Google Scholar ]
  • Lenc, L.; Král, P. Automatic face recognition system based on the SIFT features. Comput. Electr. Eng. 2015 , 46 , 256–272. [ Google Scholar ] [ CrossRef ]
  • Işık, Ş. A comparative evaluation of well-known feature detectors and descriptors. Int. J. Appl. Math. Electron. Comput. 2014 , 3 , 1–6. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Mahier, J.; Hemery, B.; El-Abed, M.; El-Allam, M.; Bouhaddaoui, M.; Rosenberger, C. Computation evabio: A tool for performance evaluation in biometrics. Int. J. Autom. Identif. Technol. 2011 , 24 , hal-00984026. [ Google Scholar ]
  • Alahi, A.; Ortiz, R.; Vandergheynst, P. Freak: Fast retina keypoint. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 510–517. [ Google Scholar ]
  • Arashloo, S.R.; Kittler, J. Efficient processing of MRFs for unconstrained-pose face recognition. In Proceedings of the 2013 IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems (BTAS), Rlington, VA, USA, 29 September–2 October 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1–8. [ Google Scholar ]
  • Ghorbel, A.; Tajouri, I.; Aydi, W.; Masmoudi, N. A comparative study of GOM, uLBP, VLC and fractional Eigenfaces for face recognition. In Proceedings of the 2016 International Image Processing, Applications and Systems (IPAS), Hammamet, Tunisia, 5–7 November 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–5. [ Google Scholar ]
  • Lima, A.; Zen, H.; Nankaku, Y.; Miyajima, C.; Tokuda, K.; Kitamura, T. On the use of kernel PCA for feature extraction in speech recognition. IEICE Trans. Inf. Syst. 2004 , 87 , 2802–2811. [ Google Scholar ]
  • Devi, B.J.; Veeranjaneyulu, N.; Kishore, K.V.K. A novel face recognition system based on combining eigenfaces with fisher faces using wavelets. Procedia Comput. Sci. 2010 , 2 , 44–51. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Simonyan, K.; Parkhi, O.M.; Vedaldi, A.; Zisserman, A. Fisher vector faces in the wild. In Proceedings of the BMVC 2013—British Machine Vision Conference, Bristol, UK, 9–13 September 2013. [ Google Scholar ]
  • Li, B.; Ma, K.K. Fisherface vs. eigenface in the dual-tree complex wavelet domain. In Proceedings of the 2009 Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Kyoto, Japan, 12–14 September 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 30–33. [ Google Scholar ]
  • Agarwal, R.; Jain, R.; Regunathan, R.; Kumar, C.P. Automatic Attendance System Using Face Recognition Technique. In Proceedings of the 2nd International Conference on Data Engineering and Communication Technology ; Springer: Singapore, 2019; pp. 525–533. [ Google Scholar ]
  • Cui, Z.; Li, W.; Xu, D.; Shan, S.; Chen, X. Fusing robust face region descriptors via multiple metric learning for face recognition in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, Portland, OR, USA, 23–28 June 2013; pp. 3554–3561. [ Google Scholar ]
  • Prince, S.; Li, P.; Fu, Y.; Mohammed, U.; Elder, J. Probabilistic models for inference about identity. IEEE Trans. Pattern Anal. Mach. Intell. 2011 , 34 , 144–157. [ Google Scholar ]
  • Perlibakas, V. Face recognition using principal component analysis and log-gabor filters. arXiv 2006 , arXiv:cs/0605025. [ Google Scholar ]
  • Huang, Z.H.; Li, W.J.; Shang, J.; Wang, J.; Zhang, T. Non-uniform patch based face recognition via 2D-DWT. Image Vision Comput. 2015 , 37 , 12–19. [ Google Scholar ] [ CrossRef ]
  • Sufyanu, Z.; Mohamad, F.S.; Yusuf, A.A.; Mamat, M.B. Enhanced Face Recognition Using Discrete Cosine Transform. Eng. Lett. 2016 , 24 , 52–61. [ Google Scholar ]
  • Hoffmann, H. Kernel PCA for novelty detection. Pattern Recognit. 2007 , 40 , 863–874. [ Google Scholar ] [ CrossRef ]
  • Arashloo, S.R.; Kittler, J. Class-specific kernel fusion of multiple descriptors for face verification using multiscale binarised statistical image features. IEEE Trans. Inf. Forensics Secur. 2014 , 9 , 2100–2109. [ Google Scholar ] [ CrossRef ]
  • Vinay, A.; Shekhar, V.S.; Murthy, K.B.; Natarajan, S. Performance study of LDA and KFA for gabor based face recognition system. Procedia Comput. Sci. 2015 , 57 , 960–969. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Sivasathya, M.; Joans, S.M. Image Feature Extraction using Non Linear Principle Component Analysis. Procedia Eng. 2012 , 38 , 911–917. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Zhang, B.; Chen, X.; Shan, S.; Gao, W. Nonlinear face recognition based on maximum average margin criterion. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 554–559. [ Google Scholar ]
  • Vankayalapati, H.D.; Kyamakya, K. Nonlinear feature extraction approaches with application to face recognition over large databases. In Proceedings of the 2009 2nd International Workshop on Nonlinear Dynamics and Synchronization, Klagenfurt, Austria, 20–21 July 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 44–48. [ Google Scholar ]
  • Javidi, B.; Li, J.; Tang, Q. Optical implementation of neural networks for face recognition by the use of nonlinear joint transform correlators. Appl. Opt. 1995 , 34 , 3950–3962. [ Google Scholar ] [ CrossRef ]
  • Yang, J.; Frangi, A.F.; Yang, J.Y. A new kernel Fisher discriminant algorithm with application to face recognition. Neurocomputing 2004 , 56 , 415–421. [ Google Scholar ] [ CrossRef ]
  • Pang, Y.; Liu, Z.; Yu, N. A new nonlinear feature extraction method for face recognition. Neurocomputing 2006 , 69 , 949–953. [ Google Scholar ] [ CrossRef ]
  • Wang, Y.; Fei, P.; Fan, X.; Li, H. Face recognition using nonlinear locality preserving with deep networks. In Proceedings of the 7th International Conference on Internet Multimedia Computing and Service, Hunan, China, 19–21 August 2015; ACM: New York, NY, USA, 2015; p. 66. [ Google Scholar ]
  • Li, S.; Yao, Y.F.; Jing, X.Y.; Chang, H.; Gao, S.Q.; Zhang, D.; Yang, J.Y. Face recognition based on nonlinear DCT discriminant feature extraction using improved kernel DCV. IEICE Trans. Inf. Syst. 2009 , 92 , 2527–2530. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Khan, S.A.; Ishtiaq, M.; Nazir, M.; Shaheen, M. Face recognition under varying expressions and illumination using particle swarm optimization. J. Comput. Sci. 2018 , 28 , 94–100. [ Google Scholar ] [ CrossRef ]
  • Hafez, S.F.; Selim, M.M.; Zayed, H.H. 2d face recognition system based on selected gabor filters and linear discriminant analysis lda. arXiv 2015 , arXiv:1503.03741. [ Google Scholar ]
  • Shanbhag, S.S.; Bargi, S.; Manikantan, K.; Ramachandran, S. Face recognition using wavelet transforms-based feature extraction and spatial differentiation-based pre-processing. In Proceedings of the 2014 International Conference on Science Engineering and Management Research (ICSEMR), Chennai, India, 27–29 November 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–8. [ Google Scholar ]
  • Fan, J.; Chow, T.W. Exactly Robust Kernel Principal Component Analysis. IEEE Trans. Neural Netw. Learn. Syst. 2019 . [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Vinay, A.; Cholin, A.S.; Bhat, A.D.; Murthy, K.B.; Natarajan, S. An Efficient ORB based Face Recognition framework for Human-Robot Interaction. Procedia Comput. Sci. 2018 , 133 , 913–923. [ Google Scholar ]
  • Lu, J.; Plataniotis, K.N.; Venetsanopoulos, A.N. Face recognition using kernel direct discriminant analysis algorithms. IEEE Trans. Neural Netw. 2003 , 14 , 117–126. [ Google Scholar ] [ PubMed ] [ Green Version ]
  • Yang, W.J.; Chen, Y.C.; Chung, P.C.; Yang, J.F. Multi-feature shape regression for face alignment. EURASIP J. Adv. Signal Process. 2018 , 2018 , 51. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Ouanan, H.; Ouanan, M.; Aksasse, B. Non-linear dictionary representation of deep features for face recognition from a single sample per person. Procedia Comput. Sci. 2018 , 127 , 114–122. [ Google Scholar ] [ CrossRef ]
  • Fathima, A.A.; Ajitha, S.; Vaidehi, V.; Hemalatha, M.; Karthigaiveni, R.; Kumar, R. Hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis. In Proceedings of the 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS), Bhubaneswar, India, 2–3 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 220–225. [ Google Scholar ]
  • Barkan, O.; Weill, J.; Wolf, L.; Aronowitz, H. Fast high dimensional vector multiplication face recognition. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1960–1967. [ Google Scholar ]
  • Juefei-Xu, F.; Luu, K.; Savvides, M. Spartans: Single-sample periocular-based alignment-robust recognition technique applied to non-frontal scenarios. IEEE Trans. Image Process. 2015 , 24 , 4780–4795. [ Google Scholar ] [ CrossRef ]
  • Yan, Y.; Wang, H.; Suter, D. Multi-subregion based correlation filter bank for robust face recognition. Pattern Recognit. 2014 , 47 , 3487–3501. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Ding, C.; Tao, D. Robust face recognition via multimodal deep face representation. IEEE Trans. Multimed. 2015 , 17 , 2049–2058. [ Google Scholar ] [ CrossRef ]
  • Sharma, R.; Patterh, M.S. A new pose invariant face recognition system using PCA and ANFIS. Optik 2015 , 126 , 3483–3487. [ Google Scholar ] [ CrossRef ]
  • Moussa, M.; Hmila, M.; Douik, A. A Novel Face Recognition Approach Based on Genetic Algorithm Optimization. Stud. Inform. Control 2018 , 27 , 127–134. [ Google Scholar ] [ CrossRef ]
  • Mian, A.; Bennamoun, M.; Owens, R. An efficient multimodal 2D-3D hybrid approach to automatic face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2007 , 29 , 1927–1943. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Cho, H.; Roberts, R.; Jung, B.; Choi, O.; Moon, S. An efficient hybrid face recognition algorithm using PCA and GABOR wavelets. Int. J. Adv. Robot. Syst. 2014 , 11 , 59. [ Google Scholar ] [ CrossRef ]
  • Guru, D.S.; Suraj, M.G.; Manjunath, S. Fusion of covariance matrices of PCA and FLD. Pattern Recognit. Lett. 2011 , 32 , 432–440. [ Google Scholar ] [ CrossRef ]
  • Sing, J.K.; Chowdhury, S.; Basu, D.K.; Nasipuri, M. An improved hybrid approach to face recognition by fusing local and global discriminant features. Int. J. Biom. 2012 , 4 , 144–164. [ Google Scholar ] [ CrossRef ]
  • Kamencay, P.; Zachariasova, M.; Hudec, R.; Jarina, R.; Benco, M.; Hlubik, J. A novel approach to face recognition using image segmentation based on spca-knn method. Radioengineering 2013 , 22 , 92–99. [ Google Scholar ]
  • Sun, J.; Fu, Y.; Li, S.; He, J.; Xu, C.; Tan, L. Sequential Human Activity Recognition Based on Deep Convolutional Network and Extreme Learning Machine Using Wearable Sensors. J. Sens. 2018 , 2018 , 10. [ Google Scholar ] [ CrossRef ]
  • Soltanpour, S.; Boufama, B.; Wu, Q.J. A survey of local feature methods for 3D face recognition. Pattern Recognit. 2017 , 72 , 391–406. [ Google Scholar ] [ CrossRef ]
  • Sharma, G.; ul Hussain, S.; Jurie, F. Local higher-order statistics (LHS) for texture categorization and facial analysis. In European Conference on Computer Vision ; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1–12. [ Google Scholar ]
  • Zhang, J.; Marszałek, M.; Lazebnik, S.; Schmid, C. Local features and kernels for classification of texture and object categories: A comprehensive study. Int. J. Comput. Vis. 2007 , 73 , 213–238. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Leonard, I.; Alfalou, A.; Brosseau, C. Spectral optimized asymmetric segmented phase-only correlation filter. Appl. Opt. 2012 , 51 , 2638–2650. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Shen, L.; Bai, L.; Ji, Z. A svm face recognition method based on optimized gabor features. In International Conference on Advances in Visual Information Systems ; Springer: Berlin/Heidelberg, Germany, 2007; pp. 165–174. [ Google Scholar ]
  • Pratima, D.; Nimmakanti, N. Pattern Recognition Algorithms for Cluster Identification Problem. Int. J. Comput. Sci. Inform. 2012 , 1 , 2231–5292. [ Google Scholar ]
  • Zhang, C.; Prasanna, V. Frequency domain acceleration of convolutional neural networks on CPU-FPGA shared memory system. In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, Monterey, CA, USA, 22–24 February 2017; ACM: New York, NY, USA, 2017; pp. 35–44. [ Google Scholar ]
  • Nguyen, D.T.; Pham, T.D.; Lee, M.B.; Park, K.R. Visible-Light Camera Sensor-Based Presentation Attack Detection for Face Recognition by Combining Spatial and Temporal Information. Sensors 2019 , 19 , 410. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Parkhi, O.M.; Vedaldi, A.; Zisserman, A. Deep face recognition. In Proceedings of the BMVC 2015—British Machine Vision Conference, Swansea, UK, 7–10 September 2015. [ Google Scholar ]
  • Wen, Y.; Zhang, K.; Li, Z.; Qiao, Y. A discriminative feature learning approach for deep face recognition. In European Conference on Computer Vision ; Springer: Berlin/Heidelberg, Germany, 2016; pp. 499–515. [ Google Scholar ]
  • Passalis, N.; Tefas, A. Spatial bag of features learning for large scale face image retrieval. In INNS Conference on Big Data ; Springer: Berlin/Heidelberg, Germany, 2016; pp. 8–17. [ Google Scholar ]
  • Liu, W.; Wen, Y.; Yu, Z.; Li, M.; Raj, B.; Song, L. Sphereface: Deep hypersphere embedding for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 212–220. [ Google Scholar ]
  • Amato, G.; Falchi, F.; Gennaro, C.; Massoli, F.V.; Passalis, N.; Tefas, A.; Vairo, C. Face Verification and Recognition for Digital Forensics and Information Security. In Proceedings of the 2019 7th International Symposium on Digital Forensics and Security (ISDFS), Barcelos, Portugal, 10–12 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [ Google Scholar ]
  • Taigman, Y.; Yang, M.; Ranzato, M.A.; Wolf, L. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 23–28 June 2014; pp. 1701–1708. [ Google Scholar ]
  • Ma, Z.; Ding, Y.; Li, B.; Yuan, X. Deep CNNs with Robust LBP Guiding Pooling for Face Recognition. Sensors 2018 , 18 , 3876. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Koo, J.; Cho, S.; Baek, N.; Kim, M.; Park, K. CNN-Based Multimodal Human Recognition in Surveillance Environments. Sensors 2018 , 18 , 3040. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Cho, S.; Baek, N.; Kim, M.; Koo, J.; Kim, J.; Park, K. Detection in Nighttime Images Using Visible-Light Camera Sensors with Two-Step Faster Region-Based Convolutional Neural Network. Sensors 2018 , 18 , 2995. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Koshy, R.; Mahmood, A. Optimizing Deep CNN Architectures for Face Liveness Detection. Entropy 2019 , 21 , 423. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Elmahmudi, A.; Ugail, H. Deep face recognition using imperfect facial data. Future Gener. Comput. Syst. 2019 , 99 , 213–225. [ Google Scholar ] [ CrossRef ]
  • Seibold, C.; Samek, W.; Hilsmann, A.; Eisert, P. Accurate and robust neural networks for security related applications exampled by face morphing attacks. arXiv 2018 , arXiv:1806.04265. [ Google Scholar ]
  • Yim, J.; Jung, H.; Yoo, B.; Choi, C.; Park, D.; Kim, J. Rotating your face using multi-task deep neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 676–684. [ Google Scholar ]
  • Bajrami, X.; Gashi, B.; Murturi, I. Face recognition performance using linear discriminant analysis and deep neural networks. Int. J. Appl. Pattern Recognit. 2018 , 5 , 240–250. [ Google Scholar ] [ CrossRef ]
  • Gourier, N.; Hall, D.; Crowley, J.L. Estimating Face Orientation from Robust Detection of Salient Facial Structures. Available online: venus.inrialpes.fr/jlc/papers/Pointing04-Gourier.pdf (accessed on 15 December 2019).
  • Gonzalez-Sosa, E.; Fierrez, J.; Vera-Rodriguez, R.; Alonso-Fernandez, F. Facial soft biometrics for recognition in the wild: Recent works, annotation, and COTS evaluation. IEEE Trans. Inf. Forensics Secur. 2018 , 13 , 2001–2014. [ Google Scholar ] [ CrossRef ]
  • Boukamcha, H.; Hallek, M.; Smach, F.; Atri, M. Automatic landmark detection and 3D Face data extraction. J. Comput. Sci. 2017 , 21 , 340–348. [ Google Scholar ] [ CrossRef ]
  • Ouerhani, Y.; Jridi, M.; Alfalou, A.; Brosseau, C. Graphics processor unit implementation of correlation technique using a segmented phase only composite filter. Opt. Commun. 2013 , 289 , 33–44. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Su, C.; Yan, Y.; Chen, S.; Wang, H. An efficient deep neural networks training framework for robust face recognition. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3800–3804. [ Google Scholar ]
  • Coşkun, M.; Uçar, A.; Yildirim, Ö.; Demir, Y. Face recognition based on convolutional neural network. In Proceedings of the 2017 International Conference on Modern Electrical and Energy Systems (MEES), Kremenchuk, Ukraine, 15–17 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 376–379. [ Google Scholar ]


Author/Technique Used | Matching | Limitation | Advantage | Database (Result)
Local Appearance-Based Techniques
Khoi et al. [ ], LBP | MAP | Skewness in face image | Robust feature in frontal faces | TDF (5%), CF1999 (13.03%), LFW (90.95%)
Xi et al. [ ], LBPNet | Cosine similarity | Complexity of CNNs | High recognition accuracy | FERET (97.80%), LFW (94.04%)
Khoi et al. [ ], PLBP | MAP | Skewness in face image | Robust feature in frontal faces | TDF (5.50%), CF (9.70%), LFW (91.97%)
Laure et al. [ ], LBP and KNN | KNN | Illumination conditions | Robust | LFW (85.71%), CMU-PIE (99.26%)
Bonnen et al. [ ], MRF and MLBP | Cosine similarity | Landmark extraction fails or is not ideal | Robust to changes in facial expression | AR, Scream (86.10%), FERET, wearing sunglasses (95%)
Ren et al. [ ], Relaxed LTP | Chi-square distance | Noise level | Superior performance compared with LBP, LTP | CMU-PIE (95.75%), Yale B (98.71%)
Hussain et al. [ ], LPQ | Cosine similarity | Lot of discriminative information | Robust to illumination variations | FERET (99.20%), LFW (75.30%)
Karaaba et al. [ ], HOG and MMD | MMD/MLPD | Low recognition accuracy | Aligning difficulties | FERET (68.59%), LFW (23.49%)
Arigbabu et al. [ ], PHOG and SVM | SVM | Complexity and computation time | Head pose variation | LFW (88.50%)
Leonard et al. [ ], VLC correlator | ASPOF | Low number of reference images used | Robustness to noise | PHPID (92%)
Napoléon et al. [ ], LBP and VLC | POF | Illumination | Rotation + translation | YaleB (98.40%), YaleB Extended (95.80%)
Heflin et al. [ ], correlation filter | PSR | Some pre-processing steps | More effort on the eye localization stage | LFW/PHPID (39.48%)
Zhu et al. [ ], PCA–FCF | Correlation filter | Uses only linear methods | Occlusion-insensitive | CMU-PIE (96.60%), FRGC2.0 (91.92%)
Seo et al. [ ], LARK + PCA | Cosine similarity | Face detection | Reduced computational complexity | LFW (78.90%)
Ghorbel et al. [ ], VLC + DoG | PCE | Low recognition rate | Robustness | FERET (81.51%)
Ghorbel et al. [ ], uLBP + DoG | Chi-square distance | Robustness | Processing time | FERET (93.39%)
Ouerhani et al. [ ], VLC | PCE | Power | Processing time | PHPID (77%)
Lenc et al. [ ], SIFT | A posteriori probability | Still far from perfect | Sufficiently robust on lower-quality real data | FERET (97.30%), AR (95.80%), LFW (98.04%)
Du et al. [ ], SURF | FLANN distance | Processing time | Robustness and distinctiveness | LFW (95.60%)
Vinay et al. [ ], SURF + SIFT | FLANN distance | Processing time | Robust in unconstrained scenarios | LFW (78.86%), Face94 (96.67%)
Calonder et al. [ ], BRIEF | KNN | Low recognition rate | Low processing time | (48%)
Author/Technique Used | Matching | Limitation | Advantage | Database (Result)
Linear Techniques
Seo et al. [ ], LARK and PCA | L2 distance | Detection accuracy | Reduced computational complexity | LFW (85.10%)
Annalakshmi et al. [ ], ICA and LDA | Bayesian classifier | Sensitivity | Good accuracy | LFW (88%)
Annalakshmi et al. [ ], PCA and LDA | Bayesian classifier | Sensitivity | Specificity | LFW (59%)
Hussain et al. [ ], LQP and Gabor | Cosine similarity | Lot of discriminative information | Robust to illumination variations | FERET (99.2%), LFW (75.3%)
Gowda et al. [ ], LPQ and LDA | SVM | Computation time | Good accuracy | MEPCO (99.13%)
Cui et al. [ ], BoW | ASM | Occlusions | Robust | AR (99.43%), ORL (99.50%), FERET (82.30%)
Khan et al. [ ], PSO and DWT | Euclidean distance | Noise | Robust to illumination | CK (98.60%), MMI (95.50%), JAFFE (98.80%)
Huang et al. [ ], 2D-DWT | KNN | Pose | Frontal or near-frontal facial images | FERET (90.63%), LFW (97.10%)
Perlibakas [ ], PCA and Gabor filter | Cosine metric | Precision | Pose | FERET (87.77%)
Hafez et al. [ ], Gabor filter and LDA | 2D-NCC | Pose | Good recognition performance | ORL (98.33%), C. YaleB (99.33%)
Sufyanu et al. [ ], DCT | NCC | High memory | Controlled and uncontrolled databases | ORL, Yale (93.40%)
Shanbhag et al. [ ], DWT and BPSO | — | Rotation | Significant reduction in the number of features | (88.44%)
Ghorbel et al. [ ], Eigenfaces and DoG filter | Chi-square distance | Processing time | Reduced representation | FERET (84.26%)
Zhang et al. [ ], PCA and FFT | SVM | Complexity | Discrimination | YALE (93.42%)
Zhang et al. [ ], PCA | SVM | Recognition rate | Reduced dimensionality | YALE (84.21%)
Fan et al. [ ], RKPCA | RBF kernel | Complexity | Robust to sparse noise | MNIST, ORL (—)
Vinay et al. [ ], ORB and KPCA | FLANN matching | Processing time | Robust | ORL (87.30%)
Vinay et al. [ ], SURF and KPCA | FLANN matching | Processing time | Reduced dimensionality | ORL (80.34%)
Vinay et al. [ ], SIFT and KPCA | FLANN matching | Low recognition rate | Complexity | ORL (69.20%)
Lu et al. [ ], KPCA and GDA | SVM | High error rate | Excellent performance | UMIST face (48%)
Yang et al. [ ], PCA and MSR | ESR | Complexity | Uses color, gradient, and regional information | HELEN face (98.00%)
Yang et al. [ ], LDA and MSR | ESR | Low performance | Uses color, gradient, and regional information | FRGC (90.75%)
Ouanan et al. [ ], FDDL | CNN | Occlusion | Orientations, expressions | AR (98.00%)
Vankayalapati and Kyamakya [ ], CNN | — | Poses | High recognition rate | ORL (95%)
Devi et al. [ ], 2FNN | — | Complexity | Low error rate | ORL (98.5%)
Author/Technique Used | Matching | Limitation | Advantage | Database (Result)
Fathima et al. [ ], GW-LDA | k-NN | High processing time | Illumination-invariant, reduced dimensionality | AT&T (88%), FACES94 (94.02%), MITINDIA (88.12%)
Barkan et al. [ ], OCLBP, LDA, and WCCN | WCCN | — | Reduced dimensionality | LFW (87.85%)
Juefei-Xu et al. [ ], ACF and WLBP | — | Complexity | Pose conditions | LFW (89.69%)
Simonyan et al. [ ], Fisher + SIFT | Mahalanobis matrix | Single feature type | Robust | LFW (87.47%)
Sharma et al. [ ], PCA–ANFIS | ANFIS | Sensitivity-specificity | — | ORL (96.66%)
Sharma et al. [ ], ICA–ANFIS | ANFIS | — | Pose conditions | ORL (71.30%)
Sharma et al. [ ], LDA–ANFIS | ANFIS | — | — | ORL (68%)
Ojala et al. [ ], DCT–PCA | Euclidean distance | Complexity | Reduced dimensionality | ORL (92.62%), UMIST (99.40%), YALE (95.50%)
Mian et al. [ ], Hotelling transform, SIFT, and ICP | ICP | Processing time | Facial expressions | FRGC (99.74%)
Cho et al. [ ], PCA–LGBPHS and PCA–Gabor wavelets | Bhattacharyya distance | Illumination conditions | Complexity | Extended Yale Face (95%)
Sing et al. [ ], PCA–FLD | SVM | Robustness | Pose, illumination, and expression | CMU (71.98%), FERET (94.73%), AR (68.65%)
Kamencay et al. [ ], SPCA-KNN | KNN | Processing time | Expression variation | ESSEX (96.80%)
Sun et al. [ ], CNN–LSTM–ELM | LSTM/ELM | High processing time | Automatically learns feature representations | OPPORTUNITY (90.60%)
Ding et al. [ ], CNNs and SAE | — | Complexity | High recognition rate | LFW (99%)
Approaches | Databases Used | Challenges Handled
Local appearance-based techniques | TDF, CF1999, LFW, FERET, CMU-PIE, AR, Yale B, PHPID, YaleB Extended, FRGC2.0, Face94 | Various lighting conditions, facial expressions, low resolution
Linear and nonlinear techniques | LFW, FERET, MEPCO, AR, ORL, CK, MMI, JAFFE, C. Yale B, Yale, MNIST, UMIST face, HELEN face, FRGC | Poses, scaling, facial expressions
Hybrid techniques | AT&T, FACES94, MITINDIA, LFW, ORL, UMIST, YALE, FRGC, Extended Yale, CMU, FERET, AR, ESSEX | —

Kortli, Y.; Jridi, M.; Al Falou, A.; Atri, M. Face Recognition Systems: A Survey. Sensors 2020, 20, 342. https://doi.org/10.3390/s20020342


PHD PRIME

Face Recognition Thesis

Face recognition is a thriving technology that analyzes images and videos to recognize faces. It is built into many technologies to identify users and ensure that only the right people access their accounts; smartphones are the most familiar example of face recognition. Research over the past decade has contributed significantly to the development of face-recognition techniques. In this article, we clearly state the things to consider before writing a face recognition thesis.

In the early stages, techniques from pattern recognition and computer vision did their best to deliver consistent face-recognition results by detecting and tracking faces. Today, systems are well developed and perform all the processes that face recognition requires. As the digital world is subject to security and privacy concerns, it demands identity recognition, and face-recognition systems contribute their best features to meet that demand.

By the end of this article, you will be able to write your own face recognition thesis. Feature extraction is one of the most important procedures in face recognition: it is the process of extracting the necessary data from a given image or video. These data can then be used to identify or verify subjects with reduced error rates.
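
To make feature extraction concrete, a handcrafted descriptor such as the local binary pattern (LBP), which appears throughout the survey tables above, can be sketched in a few lines of plain Python. This is a minimal illustrative sketch, not a production implementation: the image is assumed to be a 2D list of grayscale values, and the feature vector is the 256-bin histogram of 8-neighbour codes.

```python
def lbp_histogram(img):
    """Minimal local binary pattern (LBP) feature extractor.

    img: 2D list of grayscale values. Each interior pixel is encoded
    as an 8-bit code by comparing its 8 neighbours to the centre;
    the feature vector is the 256-bin histogram of those codes.
    """
    h, w = len(img), len(img[0])
    # Clockwise offsets of the 8 neighbours, starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= centre:
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

On a perfectly flat image, every neighbour equals the centre, so every interior pixel receives the all-ones code 255; real face images spread the codes across bins, and it is this histogram that the matcher compares.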

Further, evaluation time and memory consumption determine the efficiency of the face-recognition process. In addition, the outcomes of face recognition are optimized for the subsequent classification phases. The next section explains, with clear notes, how the face-recognition procedure works in real time.


How Does Face Recognition Work?

The face-recognition system takes an image or video frames as input and verifies or identifies the subjects present in them. For your better understanding, we have itemized the steps that are commonly used to recognize human faces.

  • Face Recognition Inputs
  • Face (Points) Detection
  • Facial Feature Extraction
  • Face Recognition
  • Facial Emotion Recognition

These are the major steps involved in recognizing human faces. In addition, several approaches are often used within each step to achieve the intended results; the most significant ones are listed below to strengthen your understanding.

  • Image Compression
  • Posture Analysis
  • Face Tracing
  • Gaze Evaluation
  • Sentiment / Emotion Recognition
  • Feature Geometries
  • Holistic Template
  • Hybrid Approaches
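
The steps above can be sketched as a generic pipeline. The detector, feature extractor, and matcher below are hypothetical placeholders: a real system would plug in concrete algorithms (e.g., a cascade detector, LBP or CNN features, a nearest-neighbour matcher).

```python
def recognize(frame, detect_faces, extract_features, match_identity):
    """Generic face-recognition pipeline: detect -> extract -> match.

    The three callables are supplied by the caller, so any concrete
    detector/extractor/matcher combination can be plugged in.
    Returns a list of (face_region, identity) pairs.
    """
    results = []
    for region in detect_faces(frame):         # face (points) detection
        features = extract_features(region)    # facial feature extraction
        identity = match_identity(features)    # face recognition
        results.append((region, identity))
    return results
```

Keeping the stages as separate callables mirrors the step list above and makes each stage independently replaceable and testable.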

The final stage of face recognition builds on the other steps listed above; in fact, both the verification (authentication) and identification processes rest on them. A database stores a massive collection of human face images.

In identification, the system performs one-to-many matching: when an image or video is fed into the face-recognition process, the system retrieves an identity report from the database. Even if the input subject is unknown (unidentified), the system matches the individual against the known (identified) individuals in the database.

The verification process, in contrast, is a one-to-one face-matching task: an unknown user is authenticated against an identity claim. This is how face recognition works in general. Moreover, it is vital to consider the accuracy of face-recognition systems. Reach us for more interesting face recognition thesis topics. Below, one of the major factors affecting face-recognition accuracy is illustrated for your better understanding.
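
The two matching modes described above can be sketched with plain feature vectors and a Euclidean distance. The gallery entries, vectors, and acceptance threshold below are made-up illustrations, not values from any real system.

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery):
    """One-to-many matching: return the closest enrolled identity."""
    return min(gallery, key=lambda name: distance(probe, gallery[name]))

def verify(probe, claimed, gallery, threshold=0.5):
    """One-to-one matching: accept the claim if the probe is close enough."""
    return distance(probe, gallery[claimed]) <= threshold
```

For example, with a gallery `{"alice": [0.1, 0.9], "bob": [0.8, 0.2]}`, a probe near Alice's vector is identified as "alice", and a verification claim of "bob" for the same probe is rejected because the distance exceeds the threshold.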

Factors Affecting Face Recognition Accuracy 

The accuracy of face recognition depends heavily on the selection of FRT (Face Recognition Technology) techniques and algorithms. The main families of algorithms are outlined below.

  • Feature-based algorithms focus on facial features such as the eyes, nose, and mouth
  • Appearance-based algorithms do not consider facial-feature reflection and postures
  • Hybrid algorithms combine the feature-based and appearance-based approaches
  • Hybrids are intended to diminish the false rates in recognition

In short, selecting the right face-recognition algorithm is essential for achieving high accuracy. If you are pursuing research in face recognition, handpick your techniques carefully; a poor choice can lead to many problems. In this regard, let us discuss the key issues found in face-recognition systems in general.

Key Issues in Face Recognition

  • Illumination / Lighting
  • Facial Expression / Manifestation
  • Facial Feature Occlusion / Obstruction
  • Artifacts in Images
  • Posture Variations / Dissimilarities
  • Image Rotation / Spinning

These are some of the main issues that arise in facial recognition. Facial recognition is inherently complex; although today's high-resolution cameras can reconstruct facial features with their built-in capabilities, the quality of recognition still comes into question. In the subsequent passage, we describe how facial recognition can be enhanced.

How Can Facial Recognition Be Improved?

Facial-recognition processes can be improved through the quality of the images and videos acquired. Image quality is influenced by factors such as illumination (lighting), occlusion (obstruction), and the postures and gestures of the individuals. Improvements can be achieved by applying the approaches listed in the upcoming section.

  • Representation of Components
    • Clustered Structure & Component Representation
    • Global-Level Representation
    • Local Supporting Features
    • Global Supporting Features
  • Manual/Handcrafted Extraction
    • Image Texture
    • Face Shapes
  • Learning-oriented Extraction
    • Decision Trees
    • Dictionaries
    • Deep Neural Networks
  • Model-oriented Extraction
    • Graph & Shape
  • Appearance-oriented Extraction
    • Multi-Linear
    • Linear & Non-Linear

The approaches listed above support improvements in facial recognition. Our researchers are well versed in the approaches of face-recognition technology, which has been made possible by conducting many concurrent research projects in every area of the field.

As a result, our engineers' capabilities span many aspects and perspectives. In addition, let us discuss the classical face-recognition algorithms and methods, with their pros and cons, for ease of understanding.

Classical Face Recognition Algorithms & Methods

  • Redundancies Application
  • Image Enrichment
  • Ineffective Analysis
  • Lack of Determining Similarities
  • High Accuracy Levels
  • Resilient Outcomes
  • Illumination-free & Posture-free
  • Lack of 2D Image Compatibility
  • Costly for Real-time Applications
  • Discriminant Feature Learning
  • Minimizes & Maximizes Differences (images)
  • Gels with Low Lighting & Expression Variations
  • Simplified Descriptor Extraction
  • Lack of Intensity
  • Effective Frequency Features
  • Diverse Biometric Application
  • Impractical High Dimensions
  • Sensitive in Illumination Changes
  • Simplified Linear Processes
  • Speed Computations
  • Compatible with Occlusion & Incomplete Data
  • Integration of Negative-free Matrix & Radial Basis
  • Huge Training Models
  • Inaccuracy Results
  • Adaptive with Diverse Structures
  • Linear Subspace Projections
  • Great Mahalanobis Distances
  • Delicate in Lighting Variations
  • Ambiguity in Neighborhood Selection

The items above are the major classical methods used in face recognition. These techniques are mature: they identify features, preprocess them, extract the face regions, and segment the image backgrounds.

The separated image is then refined through contrast detection and feature extraction, and facial point detection (nose, mouth, eyes) helps identify individuals accurately. At this point, it is helpful to look at recent face recognition project ideas.

Recent Face Recognition Thesis Ideas

  • 3D & Multi-Dimension based Emotion Recognition
  • Dynamic Facial Posture Detection
  • Face Depression/Sadness Recognition
  • Multi-Level Face Recognition & Clustering
  • Anticipated Spoofing & 3D Face Mask Authentication
  • 3D Face Arrangement & Displaying
  • Multi-Level Face Tracing & Grouping
  • Micro & Dynamic Facial Expression Recognition

The above are some interesting face recognition thesis ideas; you can select one of them, or use them as references to develop your own thesis. Next, our researchers exhibit how the performance of face recognition is evaluated.

How to Estimate the Performance of Face Recognition?

  • ‘Yes’ and ‘no’ are the two binary decisions in the facial verification process
  • ‘Yes’ signifies the same individual; ‘no’ signifies different individuals
  • Errors are categorized within the verification process
  • These binary decisions always yield one of four outcomes:
  • False negative: two images of the same person are judged to be different persons
  • False positive: two dissimilar persons are judged to be the same person
  • True negative: two dissimilar persons in the given images are correctly identified as different
  • True positive: two images of the same person are correctly identified as identical
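
From these four outcome counts, the standard verification metrics follow directly. The sketch below uses made-up counts; accuracy, the false acceptance rate (FAR), and the false rejection rate (FRR) are the figures usually reported.

```python
def verification_metrics(tp, fp, tn, fn):
    """Standard rates derived from the four verification outcomes."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        # False acceptance rate: impostor pairs wrongly accepted.
        "far": fp / (fp + tn),
        # False rejection rate: genuine pairs wrongly rejected.
        "frr": fn / (fn + tp),
    }
```

For instance, `verification_metrics(90, 5, 95, 10)` gives an accuracy of 0.925, a FAR of 0.05, and an FRR of 0.10; a thesis would typically report FAR and FRR across a range of decision thresholds.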

These are some important aspects to consider when evaluating the performance of face recognition. Thesis writing is one of the major written works required at the master's and PhD levels; in fact, a thesis is quite similar to the research papers proposed in a research area.

As this article focuses on the face recognition thesis, the subsequent section states the ways to write a good-quality thesis, for ease of understanding.

What is the Way to Write a Good Quality Thesis?

  • Gather the previous research and give your own examination/analysis of its findings
  • Exhibit your analytical and critical thinking abilities and highlight the interesting fields in your thesis
  • Refer to earlier research to fully develop your own novel propositions

On the other hand, it is very important to deliver the findings in a structured manner. Since a thesis is an experimental study of the proposed technical areas, it must be presented in an organized way. For ease of understanding, we have listed the components contained in every thesis below.

  • Introduction or background of the research
  • Literature reviews & related works
  • Research problem findings
  • Methodologies & questions
  • Discussions on outcomes
  • References & final conclusions

The above are the ways of constructing a good thesis, as well as the contents involved in the structure of every thesis. In fact, we encourage students to follow this approach and help them reach the predetermined levels. Students from all over the world benefit from the services we render. For your consideration, a glimpse of our working style in dissertation writing is enumerated in the next section.

How Do We Write a Thesis?

  • Structuring thesis formats in referencing styles such as Chicago, Turabian, APA, & MLA
  • Framing the contents with novel ideas, free of plagiarism
  • Delivering both online & offline dissertation writing assistance with privacy
  • Customizing thesis contents as per requirements, with supreme quality

This is how we work on thesis writing; in fact, this is just a sample of our working manner. We offer many interesting fields and forms of assistance for students of every institution. So far, we have covered the concepts required for framing an effective face recognition thesis. We also encourage you to explore these areas of technology further. If you are interested, you can approach our technical team at any time; high-quality thesis guidance is waiting for you.

PhD Projects in Face Recognition

Face Recognition is a  “ strong security-aware area”  in the digital image processing (DIP) field.  PhD Projects in Face Recognition  is our wide-ranging research service. As a matter of fact, it offers “erudite environs” for PhD/MS scholars, driven by the ever-rising need in defense systems.

At first, the system captures one's biometric proof of identity as facial attributes. Then, it analyzes their patterns to verify the genuine identity. At last, it yields an exact outcome as per your needs.
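The "analyzes their patterns" step is commonly implemented by comparing fixed-length embeddings. The sketch below is a minimal illustration: the embeddings are assumed to come from some upstream face-encoding network (not shown), and the 0.6 threshold is an illustrative placeholder, not a standard value.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.6):
    """Compare a probe embedding against an enrolled template and return
    the binary accept/reject decision of a verification system."""
    return cosine_similarity(probe, enrolled) >= threshold
```

Enrollment stores the template embedding once; each later capture is encoded and compared against it with this one call.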

Up-to-date applications of Face Recognition

  • Forensic Detection System
  • Biometric Personal Authentication
  • Video Surveillance System
  • Residential Security System
  • Access Control System
  • Healthcare Monitoring System
  • E-Transaction in Online Shopping
  • Bank ATM Hubs

Bio-Inspired Optimization Algorithms

  • Artificial Bee Colony Algorithm
  • Glowworm Swarm Optimization
  • Invasive Weed Optimization
  • Bumble Bees Mating Optimization
  • Bacterial Colony Optimization
  • Social Spider Optimization
  • Spider Monkey Optimization
  • Ant Lion Optimization Algorithm
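As an illustration of how this family of algorithms works, here is a deliberately simplified Artificial Bee Colony sketch that minimises a test function. It is a teaching sketch, not a production implementation: a full ABC separates the employed, onlooker, and scout phases and tunes `limit` and the colony size carefully.

```python
import random

def abc_minimise(f, dim, bounds, n_sources=10, limit=20, iters=200, seed=0):
    """Simplified Artificial Bee Colony: bees perturb food sources toward
    random partners, keep improvements greedily, and scouts restart
    sources that have gone `limit` rounds without improving."""
    rng = random.Random(seed)
    lo, hi = bounds
    new_source = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    sources = [new_source() for _ in range(n_sources)]
    trials = [0] * n_sources
    best = min(sources, key=f)
    for _ in range(iters):
        for i in range(n_sources):
            k = rng.randrange(n_sources)        # random partner source
            j = rng.randrange(dim)              # random dimension to perturb
            cand = list(sources[i])
            cand[j] += rng.uniform(-1.0, 1.0) * (sources[i][j] - sources[k][j])
            cand[j] = min(max(cand[j], lo), hi)
            if f(cand) < f(sources[i]):         # greedy selection
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > limit:               # scout bee: abandon exhausted source
                sources[i], trials[i] = new_source(), 0
        best = min([best] + sources, key=f)
    return best
```

In a face recognition context, the same loop could tune, for example, a matcher's decision threshold or feature weights, with verification error as the objective `f`.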

To be sure, we are at ease with, above all, face recognition algorithms. Not only these, we are also skilled at working with other techniques ( ML, DL, CNN, and so on ). In any case, we are ready to  “develop a new algorithm”  too.

All in all, we assist you in each aspect of your research journey. In the end, we will deliver your work with  “good quality”  in a short time. On the whole,  PhD Projects in Face Recognition  is avid to take on all kinds of real-time projects.

  • Multi-Face Recognition Using Haar Cascades and Eigenface Methods
  • F-DR Net: Face Detection and Recognition in One Network
  • Wasserstein CNN: Learning Invariant Features for NIR-VIS Face Recognition
  • Face Segmentation, Face Swapping, and Face Perception
  • Multi-Scale Feature Extraction for Single-Sample Face Recognition
  • Trunk-Branch Ensemble Convolutional Neural Networks for Video-Based Face Recognition
  • Adaptive Pose Alignment for Pose-Invariant Face Recognition
  • Face Recognition Based on Stacked Convolutional Autoencoder and Sparse Representation
  • Face Recognition Based on PCA with Weighted and Normalized Mahalanobis Distance
  • Face Feature Dynamic Recognition Based on Intelligent Image Processing
  • Influence of Blur and Motion Blur on Face Recognition Performance
  • Performance Evaluation of Face Recognition Software Based on the Dlib and OpenCV Libraries
  • B-Face: A CNN-Based Face Recognition Processor with Face Alignment for Mobile User Identification
  • Statistical Data Processing for Face Recognition via Principal Component Analysis
  • Experiments on Deep Face Recognition with Partial Faces
  • Automatic Smile Recognition Based on CNN and Face Detection
  • Improved LBPH-Based Face Recognition at Low Resolution
  • Fast Face Recognition Based on Deep Learning
  • Face Recognition Using DRLBP and SIFT Feature Extraction
  • A Multi-Face Challenging Dataset for Robust Face Recognition
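Several of the topics above build on PCA, the classic "eigenfaces" baseline. Here is a compact sketch of that core, using synthetic vectors in place of real images; it is an illustration under those assumptions, not a complete recognition system.

```python
import numpy as np

def eigenfaces_fit(faces, n_components=8):
    """PCA ('eigenfaces') on a stack of flattened face images: returns the
    mean face and the top principal directions of the centred data."""
    X = np.asarray(faces, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centred data yields the eigenvectors of the covariance matrix
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Project one flattened face into the eigenface subspace."""
    return components @ (np.asarray(face, dtype=float) - mean)

def nearest_identity(probe, gallery, labels, mean, components):
    """1-nearest-neighbour classification in the eigenface subspace."""
    p = project(probe, mean, components)
    dists = [np.linalg.norm(p - project(g, mean, components)) for g in gallery]
    return labels[int(np.argmin(dists))]
```

A real system would first crop and align the faces and keep far more components; this sketch shows only the projection-plus-nearest-neighbour core.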

MILESTONE 1: Research Proposal

Finalize the journal (indexing).

Before sitting down to write the research proposal, we need to decide on the exact target journals, e.g. SCI, SCI-E, ISI, SCOPUS.

Research Subject Selection

As a doctoral student, subject selection is a big problem. Phdservices.org has a team of world-class experts experienced in assisting with all subjects. When you decide to work in networking, we assign experts in your specific area for assistance.

Research Topic Selection

We help you with the right and perfect topic selection, one that sounds interesting to the other fellows of your committee. For example, if your interest is in networking, the research topic could be VANET, MANET, or any other.

Literature Survey Writing

To ensure the novelty of research, we find research gaps in 50+ latest benchmark papers (IEEE, Springer, Elsevier, MDPI, Hindawi, etc.)

Case Study Writing

After the literature survey, we identify the main issue/problem that your research topic will aim to resolve, and provide elegant writing support to establish the relevance of the issue.

Problem Statement

Based on the research gap findings and the importance of your research, we formulate an appropriate and specific problem statement.

Writing Research Proposal

Writing a good research proposal requires a lot of time. We need only a short span to cover all major aspects (reference paper collection, deficiency finding, drawing the system architecture, highlighting novelty).

MILESTONE 2: System Development

Fix the implementation plan.

We prepare a clear project implementation plan that narrates your proposal step by step and contains software and OS specifications. We recommend the most suitable tools/software that fit your concept.

Tools/Plan Approval

We get approval for the implementation tool, software, programming language, and finally the implementation plan before starting the development process.

Pseudocode Description

Our source code is original since we write the code only after the pseudocode, algorithm writing, and mathematical equation derivations are complete.

Develop Proposal Idea

We implement our novel idea in the step-by-step process given in the implementation plan. We can also help scholars with the implementation.

Comparison/Experiments

We perform the comparison between the proposed and existing schemes in both quantitative and qualitative terms, since this is the most crucial part of any journal paper.

Graphs, Results, Analysis Table

We evaluate and analyze the project results by plotting graphs, computing numerical results, and broadly discussing the quantitative results in tables.

Project Deliverables

For every project order, we deliver the following: reference papers, source code, screenshots, a project video, and installation and running procedures.

MILESTONE 3: Paper Writing

Choosing the right format.

We intend to write the paper in a customized layout. If you are interested in a specific journal, we are ready to support you; otherwise we prepare it at IEEE transaction level.

Collecting Reliable Resources

Before paper writing, we collect reliable resources such as 50+ journal papers, magazines, news, encyclopedias (books), benchmark datasets, and online resources.

Writing Rough Draft

We first create an outline of the paper and then write under each heading and sub-heading. It contains the novel idea and resources.

Proofreading & Formatting

We proofread and format the paper to fix typesetting errors and to avoid misspelled words, misplaced punctuation marks, and so on.

Native English Writing

We check the communication of the paper by rewriting it with native English writers who completed their English literature studies at the University of Oxford.

Scrutinizing Paper Quality

We examine the paper quality with top experts who can easily fix issues in journal paper writing and also confirm the level of the journal paper (SCI, Scopus, or normal).

Plagiarism Checking

We at phdservices.org give a 100% guarantee of original journal paper writing. We never use previously published works.

MILESTONE 4: Paper Publication

Finding the apt journal.

We play a crucial role in this step since it is very important for the scholar's future. Our experts will help you choose high Impact Factor (SJR) journals for publishing.

Lay Paper to Submit

We organize your paper for journal submission, which covers the preparation of Authors Biography, Cover Letter, Highlights of Novelty, and Suggested Reviewers.

Paper Submission

We upload the paper along with all prerequisites required by the journal. We completely remove the frustration from paper publishing.

Paper Status Tracking

We track your paper's status, answer the questions raised before the review process, and give you frequent updates on feedback received from the journal.

Revising Paper Precisely

When we receive a decision to revise the paper, we prepare a point-by-point response addressing every reviewer query and resubmit the paper to secure final acceptance.

Get Accept & e-Proofing

We receive the final acceptance confirmation letter, and the editors send e-proofing and licensing to ensure originality.

Publishing Paper

Once the paper is published online, we inform you of the paper title, author information, journal name, volume, issue number, page numbers, and DOI link.

MILESTONE 5: Thesis Writing

Identifying the university format.

We pay special attention to your thesis writing, and our 100+ thesis writers are proficient and clear in writing theses for all university formats.

Gathering Adequate Resources

We collect primary and adequate resources for writing a well-structured thesis, using published research articles, 150+ reputed reference papers, a writing plan, and so on.

Writing Thesis (Preliminary)

We write the thesis chapter by chapter without any empirical mistakes and provide a completely plagiarism-free thesis.

Skimming & Reading

Skimming involves reading the thesis and checking the abstract, conclusions, sections and sub-sections, paragraphs, sentences, and words, and writing the thesis in the chronological order of the papers.

Fixing Crosscutting Issues

This step is tricky when a thesis is written by amateurs. Proofreading and formatting are done by our world-class thesis writers, who avoid verbosity and brainstorm for significant writing.

Organize Thesis Chapters

We organize the thesis chapters by completing the following: elaborating each chapter, structuring the chapters, maintaining the flow of writing, correcting citations, etc.

Writing Thesis (Final Version)

We pay attention to the importance of the thesis contribution, a well-illustrated literature review, sharp and broad results and discussion, and a relevant applications study.

How does PhDservices.org deal with significant issues?

1. Novel Ideas

Novelty is essential for a PhD degree. Our experts bring novel ideas to the particular research area, which can only be determined after a thorough literature search (state-of-the-art works published in IEEE, Springer, Elsevier, ACM, ScienceDirect, Inderscience, and so on). Reviewers and editors of SCI and SCOPUS journals will always demand novelty in each published work. Our experts have in-depth knowledge of all major and sub-research fields to introduce new methods and ideas. MAKING NOVEL IDEAS IS THE ONLY WAY OF WINNING A PHD.

2. Plagiarism-Free

To improve the quality and originality of works, we strictly avoid plagiarism, since plagiarism is not acceptable for any type of journal (SCI, SCI-E, or Scopus) from an editorial and reviewer point of view. We use anti-plagiarism software that examines the similarity score of documents with good accuracy, including tools such as Viper and Turnitin. Students and scholars get their work with zero tolerance to plagiarism. DON'T WORRY ABOUT YOUR PHD, WE WILL TAKE CARE OF EVERYTHING.

3. Confidential Info

We keep your personal and technical information secret, since this is a basic worry for all scholars.

  • Technical Info: We never share your technical details with any other scholar, since we know the importance of the time and resources scholars give us.
  • Personal Info: Access to scholars' personal details is restricted among our experts. Only our organization's leading team holds your basic and necessary information.

CONFIDENTIALITY AND PRIVACY OF INFORMATION HELD IS OF VITAL IMPORTANCE AT PHDSERVICES.ORG. WE ARE HONEST WITH ALL CUSTOMERS.

4. Publication

Most PhD consultancy services end their services at paper writing, but PhDservices.org is different: we guarantee both paper writing and publication in reputed journals. With our 18+ years of experience in delivering PhD services, we meet all requirements of journals (reviewers, editors, and editors-in-chief) for rapid publication. From the beginning of paper writing, we apply our smart work. PUBLICATION IS THE ROOT OF A PHD DEGREE. WE ARE LIKE A FRUIT GIVING A SWEET FEELING TO ALL SCHOLARS.

5. No Duplication

After completion of your work, it is no longer available in our library; i.e., it is erased after the completion of your PhD work, so we avoid giving duplicate content to scholars. This step pushes our experts to bring new ideas, applications, methodologies, and algorithms. Our work is standard, high-quality, and universal; everything we make is new for every scholar. INNOVATION IS THE ABILITY TO SEE ORIGINALITY. EXPLORATION IS THE ENGINE THAT DRIVES INNOVATION, SO LET'S ALL GO EXPLORING.

Client Reviews

I ordered a research proposal in the research area of Wireless Communications, and it was as good as I could have hoped.

I wished to complete the implementation using the latest software/tools and had no idea where to order it. My friend suggested this place, and it delivers what I expected.

It is a really good platform to get all PhD services, and I have used it many times because of the reasonable price, best customer service, and high quality.

My colleague recommended this service to me, and I'm delighted with their services. They guided me a lot and gave worthy content for my research paper.

I'm never disappointed with any kind of service. So far, I have worked with professional writers and received a lot of opportunities.

- Christopher

Once I entered this organization, I just felt relaxed, because lots of my colleagues and family relations had suggested using this service, and I received the best thesis writing.

I recommend phdservices.org. They have professional writers for all types of writing (proposal, paper, thesis, assignment) support at an affordable price.

You guys did a great job and saved me money and time. I will keep working with you, and I recommend you to others too.

These experts are fast, knowledgeable, and dedicated to working under a short deadline. I got a good conference paper in a short span.

Guys! You are great and real experts in paper writing, since it exactly matched my demand. I will approach you again.

I am fully satisfied with the thesis writing. Thank you for your faultless service; I will soon come back again.

Trusted customer service that you offer. I don't have any cons to mention.

I was at the edge of my doctoral graduation since my thesis was totally unconnected chapters. You people did magic, and I got my complete thesis!!!

- Abdul Mohammed

A good family environment with collaboration, and a lot of hardworking team members who actually share their knowledge by offering PhD services.

I enjoyed working with PhD services very much. I asked several questions about my system development and was amazed by their smoothness, dedication, and care.

I had not provided any specific requirements for my proposal work, but you guys are very awesome because I received a proper proposal. Thank you!

- Bhanuprasad

I read my entire research proposal and liked how the concept suits my research issues. Thank you so much for your efforts.

- Ghulam Nabi

I am extremely happy with your project development support, and the source code is easy to understand and execute.

Hi!!! You guys supported me a lot. Thank you, and I am 100% satisfied with the publication service.

- Abhimanyu

I found this to be a wonderful platform for scholars, so I highly recommend this service to all. I ordered a thesis proposal, and they covered everything. Thank you so much!!!

