Face recognition is a biometric software application that analyzes, identifies, or verifies a person from a digital image. It is widely used for security purposes in the military, airports, universities, ATMs, banks, and more. We have a plethora of techniques and algorithms for feature extraction in face recognition. In this blog post, we will see how deep learning is used to extract features for face recognition.
Basics of Face Recognition
Let’s see how a human recognizes faces. The first things a person notices are the eyes, cheekbones, nose, mouth, and eyebrows, as well as the texture and colour of the skin. Meanwhile, our brain processes the face as a whole and is able to identify the person: it relates the processed picture to an internal averaged pattern and finds characteristic differences.
In the same manner, we build a face recognition system that captures these differences. The process includes steps such as localization, normalization, feature extraction, and recognition.
What is Deep Learning?
Deep learning is a subset of machine learning that deals with unstructured or unlabelled data by creating networks capable of unsupervised learning. The Convolutional Neural Network (CNN) is an example of a deep learning model. It is a feedforward deep network in which the first several layers are sparsely connected: each unit processes only a small local region of its input.
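The sparse, local connectivity mentioned above is exactly what a convolution provides: each output value depends only on a small patch of the input, not on every pixel. A minimal sketch of a 2D convolution in plain NumPy (the image and kernel values here are illustrative, not from any real model):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution (valid padding). Each output pixel depends
    only on a small local patch of the input -- the 'sparse connectivity'
    of convolutional layers."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Only a kh x kw window of the image influences out[i, j]
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A 5x5 toy "image" and a 3x3 vertical edge-detection kernel
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])
features = conv2d(image, kernel)
print(features.shape)  # (3, 3)
```

In a real CNN, many such kernels are learned from data and stacked in layers, but the local-window computation is the same.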
The face recognition process flow has a sequence of related steps:
- We look at a video frame or an image and find all the faces in it.
- After extracting the faces, we focus on each face. We need to recognize that it is the same face despite poor lighting or a tilted or turned head.
- Now, we highlight the unique characteristics of the face that differentiate the person from others, such as the shape of the nose, the size of the eyes, or a scar.
- At last, we compare these unique characteristics with the patterns of people we already know to determine the person’s name.
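Libraries such as `face_recognition` (built on dlib) wrap the detection and feature-extraction steps and reduce the final matching step to comparing embedding vectors by distance. A minimal sketch of that matching step with made-up 128-dimensional encodings standing in for real face embeddings:

```python
import numpy as np

def match_face(known_encodings, known_names, probe, tolerance=0.6):
    """Compare a probe face encoding against known encodings and
    return the closest name, or None if nobody is within tolerance."""
    distances = [np.linalg.norm(enc - probe) for enc in known_encodings]
    best = int(np.argmin(distances))
    return known_names[best] if distances[best] <= tolerance else None

# Toy encodings; in practice these come from a trained face-embedding model
rng = np.random.default_rng(0)
alice = rng.normal(size=128)
bob = rng.normal(size=128)

# A probe that is a slightly perturbed copy of Alice's encoding
probe = alice + rng.normal(scale=0.01, size=128)

name = match_face([alice, bob], ["Alice", "Bob"], probe)
print(name)  # Alice
```

The tolerance of 0.6 mirrors the default threshold commonly used with dlib-style 128-d face encodings; it is a tunable trade-off between false matches and misses.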
Deep Learning Implementation for Face Recognition feature extraction
Some feature extraction methods frequently used in the earliest literature, such as principal component analysis and the extraction of eigenfaces, still provide a baseline for other techniques. Feature extraction techniques for face recognition fall into two classes: holistic approaches and local approaches.
In the holistic approach, the face image is converted into a single vector, and the algorithm is applied to the whole face image. In the local approach, the face image is transformed into a set of vectors extracted from selected regions.
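The eigenfaces technique is the classic holistic example: each image is flattened into one long vector, and PCA projects those vectors onto a small number of principal components. A sketch in NumPy, using random arrays as stand-ins for real face images:

```python
import numpy as np

rng = np.random.default_rng(42)
# 20 fake grayscale "face" images of 16x16 pixels (real datasets
# would use aligned face crops instead)
faces = rng.random((20, 16, 16))

# Holistic approach: each image becomes a single 256-d vector
X = faces.reshape(len(faces), -1)

# Eigenfaces via PCA: centre the data, then take the top
# principal components from the SVD of the centred matrix
mean_face = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean_face, full_matrices=False)
eigenfaces = Vt[:10]                        # top 10 components
features = (X - mean_face) @ eigenfaces.T   # 10-d feature per face
print(features.shape)  # (20, 10)
```

Recognition then compares these low-dimensional feature vectors (e.g. by nearest neighbour) instead of raw pixels.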
Here are some deep learning networks for face recognition:
- In 2014, Facebook developed DeepFace, a facial recognition system based on deep convolutional neural networks that identifies human faces in digital images. It achieves an accuracy of about 97%.
- DeepID, or Deep hidden IDentity features, was described by Yi Sun et al. in their paper titled “Deep Learning Face Representation from Predicting 10,000 Classes”. The system supports both identification and verification tasks by training with a contrastive loss.
- Omkar Parkhi et al. developed VGGFace, described in their paper titled “Deep Face Recognition”. The system uses a very large dataset to train a very deep CNN model for face recognition.
- Researchers at Google developed FaceNet in 2015. Third-party open-source implementations of the FaceNet model are available, along with pre-trained models.
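FaceNet is trained with a triplet loss, which pulls embeddings of the same person together and pushes embeddings of different people at least a margin apart. A toy NumPy version of that loss (the 2-d embeddings here are purely illustrative; real FaceNet embeddings are 128-dimensional):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet loss: zero when the same-identity pair
    is already at least `margin` closer than the different-identity pair."""
    pos_dist = np.sum((anchor - positive) ** 2)  # same person
    neg_dist = np.sum((anchor - negative) ** 2)  # different person
    return max(pos_dist - neg_dist + margin, 0.0)

anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])   # same person, nearby embedding
negative = np.array([-1.0, 0.0])  # different person, far away
loss = triplet_loss(anchor, positive, negative)
print(loss)  # 0.0 -- this triplet is already well separated
```

During training, the network's weights are updated to drive this loss toward zero over many such triplets, which is what makes simple distance thresholds work at recognition time.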