
Implementing Face Recognition in 2 minutes


A detailed guide to Face Recognition using Python


Have you ever been surprised to find that Facebook automatically tags your friends in your pictures? That is when we realize how much smarter machines have become. Our face conveys a lot of information, including our emotional state. Facial recognition has also found application in the security domain, where it helps identify individuals.

In humans, the temporal lobe of the brain is responsible for recognizing faces. In machine learning, a model is fed many images of faces and trained on them. When a test image is given, the model tries to match it against the images it has learned.

Following are the use cases where Face Recognition is used:

  • Fraud Detection for Passports and Visas — The Australian passport office uses automatic face recognition software and, according to reports, the system is more effective at detecting fraud than a human.

  • Identification of Criminals — Law enforcement agencies have started implementing facial recognition systems to improve the quality of investigations.
  • Track Attendance — Some organizations use facial recognition systems to track the attendance of their employees.

There have been many implementations in this domain especially DeepFace by Facebook and FaceNet by Google.

But a beginner who is trying to implement it for a project, or who simply wants to explore facial recognition, does not need to learn these advanced implementations. Here is my attempt to implement face recognition in a simple way using Local Binary Patterns.

Local Binary Pattern (LBP)

The Local Binary Patterns Histograms (LBPH) algorithm was proposed in 2006. It is based on the local binary operator and is widely used in facial recognition due to its computational simplicity and discriminative power.
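
To make the idea concrete, here is a minimal sketch (my own illustration, not part of the original article) of how the basic LBP operator turns a 3x3 grayscale neighborhood into an 8-bit code under one common convention; the histograms of these codes over image regions are what make up the LBPH descriptor.

import numpy as np

def lbp_code(patch):
    # Compute the LBP code of the centre pixel of a 3x3 grayscale patch.
    # Neighbours greater than or equal to the centre contribute a 1 bit,
    # others a 0 bit (one common convention; orderings vary in practice).
    center = patch[1, 1]
    # Neighbours read clockwise starting from the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    # Interpret the 8 bits as a number between 0 and 255
    return sum(bit << i for i, bit in enumerate(bits))

patch = np.array([[90, 100, 110],
                  [80, 100, 120],
                  [70,  60, 130]])
print(lbp_code(patch))  # prints 30 for this patch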

Why Local Binary Pattern?

  • The LBPH method is one of the best-performing texture descriptors.
  • The LBP operator is robust against monotonic grayscale transformations.
  • In LBPH, each image is analyzed independently, whereas the Eigenfaces and Fisherfaces methods look at the dataset as a whole.
  • The LBPH method will probably work better than Fisherfaces under varying environments and lighting conditions, although this also depends on the training and testing datasets.
  • LBPH can recognize both frontal and side views of faces.

Since this blog focuses on the implementation of LBPH, let's move straight to the code.

Implementation

Importing Libraries:

We will first import the libraries required for our code.

import cv2  
import os   
import matplotlib.pyplot as plt 
import numpy as np 

%matplotlib inline

cv2 — The LBPH algorithm is part of OpenCV

os — For working with directory paths

matplotlib — For visualizing images

numpy — For passing the labels as an array to the train function

%matplotlib inline — For displaying plots inline in Jupyter

Detect Face:

def detect_faces_predict(img):
    # OpenCV face detection works on grayscale images
    gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lbp_cascade_face = cv2.CascadeClassifier('lbpcascade_frontalface.xml')

    # Detect faces; returns a list of (x, y, w, h) rectangles
    faces = lbp_cascade_face.detectMultiScale(gray_image, scaleFactor=1.2, minNeighbors=5)
    if len(faces) == 0:
        return None, None
    (x, y, w, h) = faces[0]

    # Return the cropped face region and its bounding rectangle
    return gray_image[y:y+h, x:x+w], faces[0]

Time to break down the function

In OpenCV, face detection works on grayscale images, so we first need to convert our colour image to grayscale.

gray_image = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

The brain behind the LBP classifier is its cascade file. Since we are detecting faces, the frontal-face cascade file is required.

Link to the xml file: https://github.com/opencv/opencv/blob/master/data/lbpcascades/lbpcascade_frontalface.xml

lbp_cascade_face = cv2.CascadeClassifier('lbpcascade_frontalface.xml')

We use the detectMultiScale method to detect faces in the gray image. It returns the coordinates of each detected face as an (x, y, w, h) rectangle.

faces = lbp_cascade_face.detectMultiScale(gray_image, scaleFactor=1.2, minNeighbors=5)

If no face is found, we return None; otherwise we return the cropped face region and its coordinates.

if len(faces) == 0:
    return None, None
(x, y, w, h) = faces[0]
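
Note that detectMultiScale can return more than one face, while the function above keeps only the first detection. As a small sketch (reusing the same cascade output, not part of the original function), you could instead draw a rectangle around every face it finds:

# Sketch: draw a green rectangle around every detected face
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)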

To check if the function is working properly, we will pass an image and then plot the detected face.

test_image = cv2.imread('test.jpg')
face, rect = detect_faces_predict(test_image)
plt.imshow(face, cmap='gray')

[Image: detected face. Image source: https://www.businessinsider.in/]

Looks like it is working properly. Let's move on to the data preparation step.

Prepare the data:

Before writing the code, let me explain how our directory looks.

[Image: directory structure of the 'training-data' folder. Photo by author]

The ‘training-data’ folder contains two subfolders, 0 and 1.

Note: the folder name acts as the label for our model, which is why we name the folders 0 and 1.

The 0 folder contains pictures of Barack Obama, whereas the 1 folder contains pictures of David Beckham.
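
Before moving on, a quick optional check of the layout can save debugging time later. Here is a minimal sketch (assuming the 'training-data' folder sits next to the notebook) that lists every file it finds:

# Optional sanity check: print every file found under 'training-data'
# so we can confirm the 0 and 1 subfolders contain our images.
for root, dirs, files in os.walk('training-data'):
    for file_name in files:
        print(os.path.join(root, file_name))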

def prepare_data(data_path):
    faces = []
    labels = []
    dir_path = os.listdir(data_path)
    for dir_name in dir_path:
        # The folder name (0 or 1) acts as the label
        label = int(dir_name)
        sub_dir_path = data_path + "/" + dir_name
        sub_dir = os.listdir(sub_dir_path)
        for image_dir_path in sub_dir:
            image_path = sub_dir_path + "/" + image_dir_path
            img = cv2.imread(image_path)
            face, rect = detect_faces_predict(img)
            # Check if a face is present
            if face is not None:
                # Append the face and its label
                faces.append(face)
                labels.append(label)
    return faces, labels

faces, labels = prepare_data('training-data')

Let’s break down the code

We first initialize faces and labels as empty lists.

faces = []
labels = []

We list the contents of the data directory using os.

dir_path = os.listdir(data_path)

Since we have two subfolders, we loop over the directory contents.

for dir_name in dir_path:

We then build the path of each subdirectory and list its contents using os.

sub_dir_path = data_path + "/" + dir_name
sub_dir = os.listdir(sub_dir_path)

To read every image inside the folders, we loop over each subdirectory. We build each image's path and then read it.

for image_dir_path in sub_dir:
    image_path = sub_dir_path + "/" + image_dir_path
    img = cv2.imread(image_path)

We now get the detected face by passing the image to the detect_faces_predict function.

face, rect = detect_faces_predict(img)

If a face is present, we append the face and its label.

if face is not None:
    faces.append(face)
    labels.append(label)

Before defining our model, let’s prepare our data

faces, labels = prepare_data('training-data')
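
As a quick sanity check (not part of the original listing), we can print how many faces and labels were collected; the two counts should match.

print('Total faces prepared:', len(faces))
print('Total labels prepared:', len(labels))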

Face Recognizer Model:

recognizer = cv2.face.LBPHFaceRecognizer_create()

The LBPHFaceRecognizer_create function creates an LBPH face recognizer model.

It is time to train our model on our data

recognizer.train(faces, np.array(labels))

Let’s give our labels appropriate names, since the 0 folder contains Barack Obama pictures and the 1 folder contains David Beckham pictures.

name = ['Barack Obama', 'David Beckham']
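
Note that LBPHFaceRecognizer_create lives in the cv2.face module, which ships with the opencv-contrib-python package rather than the base opencv-python build. As an optional sketch (the file name lbph_model.yml is my own choice, not from the original article), the trained model can be saved to disk and reloaded later so it does not have to be retrained every time:

# Save the trained LBPH model to disk (file name is arbitrary)
recognizer.write('lbph_model.yml')

# Later, recreate a recognizer and load the saved model
loaded_recognizer = cv2.face.LBPHFaceRecognizer_create()
loaded_recognizer.read('lbph_model.yml')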

Face Prediction:

We have reached the final step of our code

def predict_face(test_image):
    img = test_image.copy()
    face, rect = detect_faces_predict(img)
    # predict returns a (label, confidence) pair
    label = recognizer.predict(face)
    final_name = name[label[0]]
    (x, y, w, h) = rect
    # Convert to RGB for matplotlib, then draw the rectangle and the name
    final_image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    cv2.rectangle(final_image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(final_image, final_name, (x, y - 5),
                cv2.FONT_HERSHEY_PLAIN, 1.2, (0, 255, 0), 2)

    return final_image

Let’s break down the function

We first create a copy of our test_image and pass that copy to our detect_faces_predict function to get the cropped face and the coordinates of the bounding rectangle.

img = test_image.copy()
face, rect = detect_faces_predict(img)

We use the predict method of our recognizer, which returns a (label, confidence) pair, so label[0] gives the predicted label. We then use our name list to get the corresponding name.

label = recognizer.predict(face)
final_name = name[label[0]]
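
Since predict also returns a confidence value (lower means a closer match), it can be used to reject faces the model has never seen. A hedged sketch follows; the threshold of 100 is an arbitrary value you would tune on your own data, not something from the original article:

label, confidence = recognizer.predict(face)
# Lower confidence means a closer match; the threshold is arbitrary
# and should be tuned on your own data.
if confidence < 100:
    final_name = name[label]
else:
    final_name = 'Unknown'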

The cv2.rectangle function draws a rectangle around the detected face, and the cv2.putText function writes the predicted name above the rectangle.

final_image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
cv2.rectangle(final_image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.putText(final_image, final_name, (x, y - 5),
            cv2.FONT_HERSHEY_PLAIN, 1.2, (0, 255, 0), 2)

Time to test the model using a test image

test_image = cv2.imread('test.jpg')
final_image = predict_face(test_image)
plt.figure(figsize=(10,10))
plt.imshow(final_image)
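
If you also want to save the annotated result to disk (an optional step, not in the original code), remember that final_image is in RGB while cv2.imwrite expects BGR:

# Convert back to BGR before saving, since OpenCV writes images in BGR order
cv2.imwrite('result.jpg', cv2.cvtColor(final_image, cv2.COLOR_RGB2BGR))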

[Image: predicted face labelled with the person's name. Image source: https://www.businessinsider.in/]

We have successfully implemented face recognition using the LBPH algorithm.

You can find the above code in my GitHub repository:

