81-point face landmark detection


Dlib has come up here before, mainly for detecting 68 face landmarks (the standard 68-point layout).

For how to use dlib's 68 points, please refer to: /xingchenbingbuyu/article/details/51116354

The demo covered in this article is an extension built on dlib.

Project address: https://github.com/codeniko/shape_predictor_81_face_landmarks

This project detects 81 facial feature points in a given image (a picture or a frame from a video stream). The training process is similar to that of dlib's 68-point model; the author added 13 points on the forehead, which improves accuracy for head detection and for image processing in that region, for example the author's trick of placing a hat on the forehead (sketched below the image)~

(Image: placing a hat on someone’s head.)
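As a rough illustration of that idea, here is a minimal sketch (my own, not the author's code; face.jpg and hat.png are hypothetical files, with hat.png assumed to have an alpha channel) that anchors a hat image to the bounding box of the 13 forehead points:

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_81_face_landmarks.dat')

img = cv2.imread('face.jpg')                       # hypothetical test photo
hat = cv2.imread('hat.png', cv2.IMREAD_UNCHANGED)  # hypothetical RGBA hat image

for d in detector(img, 0):
    shape = predictor(img, d)
    # points 68-80 are the author's extra forehead points
    forehead = np.array([[shape.part(i).x, shape.part(i).y]
                         for i in range(68, 81)], dtype=np.int32)
    x, y, w, h = cv2.boundingRect(forehead)

    # scale the hat to the forehead width and place it above the hairline
    hat_w = w
    hat_h = int(hat.shape[0] * hat_w / hat.shape[1])
    hat_resized = cv2.resize(hat, (hat_w, hat_h))
    y0 = max(0, y - hat_h)

    # alpha-blend the hat onto the photo, cropping at the image borders
    roi = img[y0:y0 + hat_h, x:x + hat_w]
    hat_crop = hat_resized[:roi.shape[0], :roi.shape[1]]
    alpha = hat_crop[:, :, 3:4] / 255.0
    roi[:] = ((1 - alpha) * roi + alpha * hat_crop[:, :, :3]).astype(np.uint8)

cv2.imwrite('face_with_hat.jpg', img)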

For the extraction of these 13 points, the author drew on patrikhuber's eos project: https://github.com/codeniko/eos .

The author also used the Surrey Face Model.

In the author's own words:

“I made the modifications here, then ran it on the entire ibug large database of images to overwrite each image’s 68 landmark coordinates with my 81 landmark coordinates. From here, the training for the shape predictor model can proceed using http://dlib.net/train_shape_predictor.py.html”
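dlib exposes that training step through its Python API as well; here is a minimal sketch of what the call looks like (the option values and file names below are illustrative, not the settings actually used for the 81-point model):

import dlib

options = dlib.shape_predictor_training_options()
options.oversampling_amount = 300   # random jittered copies generated per training sample
options.nu = 0.05                   # regularization; smaller values regularize more strongly
options.tree_depth = 4              # depth of each regression tree in the cascade
options.num_threads = 4
options.be_verbose = True

# The XML file is an ibug-style annotation set, here carrying 81 landmark
# coordinates per face instead of the usual 68.
dlib.train_shape_predictor('training_with_face_landmarks.xml',
                           'predictor_81.dat', options)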

You can watch the author's demo video of the project here: https://www.youtube.com/watch?v=mDJrASIB1T0

The following shows the distribution of the 81 feature points: points 0–67 are dlib's original 68 points, and points 68–80 are the 13 points added by the author.
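To make the numbering concrete, here is a small sketch (face.jpg is a placeholder) that draws the two groups in different colors:

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_81_face_landmarks.dat')

img = cv2.imread('face.jpg')            # placeholder test image
for d in detector(img, 0):
    shape = predictor(img, d)           # shape.num_parts == 81
    for i in range(shape.num_parts):
        # green: dlib's standard 68 points; red: the 13 added forehead points
        color = (0, 255, 0) if i < 68 else (0, 0, 255)
        cv2.circle(img, (shape.part(i).x, shape.part(i).y), 2, color, -1)
cv2.imwrite('landmarks_81.jpg', img)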

After downloading the project code, switch into the project directory (I use an Anaconda environment) and run the webcam_record.py script there. Its source, which follows the same pattern as the dlib basics post, is as follows:

import dlib
import numpy as np
import cv2


cap = cv2.VideoCapture(0)                    # open the default camera
fourcc = cv2.VideoWriter_fourcc(*'XVID')     # codec used for the output video file

# Output video parameters; the frame size must match the camera resolution,
# otherwise the written file may be unplayable.
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (1280, 720))

predictor_path = 'shape_predictor_81_face_landmarks.dat'  # pretrained model shipped with the project

detector = dlib.get_frontal_face_detector()       # dlib's built-in frontal face detector
predictor = dlib.shape_predictor(predictor_path)  # 81-point landmark predictor built from the model file

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:                              # stop if no frame could be read
        break
    frame = cv2.flip(frame, 1)               # mirror the image horizontally
    dets = detector(frame, 0)                # detect faces; dets holds the face rectangles
    for k, d in enumerate(dets):
        shape = predictor(frame, d)          # predict the 81 landmarks inside this face rectangle
        landmarks = np.array([[p.x, p.y] for p in shape.parts()])  # (81, 2) coordinate array (unused below)
        for num in range(shape.num_parts):
            cv2.circle(frame, (shape.part(num).x, shape.part(num).y), 3, (0, 255, 0), -1)  # draw each point
    cv2.imshow('frame', frame)
    out.write(frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):    # press 'q' to quit
        print("q pressed")
        break


cap.release()
out.release()

cv2.destroyAllWindows()

Running the above code captures the camera feed in real time and performs landmark detection on any faces. The results look like this:

The real-time detection works really well, and it runs entirely on the CPU. It also handles faces in small images well, but once a face turns to a slightly larger lateral angle it is no longer detected~
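If missed detections on small or turned faces become a problem, two things are worth trying (a sketch under my own assumptions, not something this project itself does): let the HOG detector upsample the frame once, which helps with small faces at some speed cost, or switch to dlib's CNN face detector, which tolerates pose better but needs the separately downloaded mmod_human_face_detector.dat and is much slower without a GPU:

import dlib
import cv2

img = cv2.imread('face.jpg')   # placeholder test image

# Option 1: HOG detector with one round of upsampling (second argument),
# which helps it find smaller faces at the cost of extra computation.
hog_detector = dlib.get_frontal_face_detector()
hog_dets = hog_detector(img, 1)

# Option 2: dlib's CNN detector, more robust to lateral head angles.
# Requires mmod_human_face_detector.dat (downloadable from dlib.net).
cnn_detector = dlib.cnn_face_detection_model_v1('mmod_human_face_detector.dat')
cnn_dets = cnn_detector(img, 0)
rects = [d.rect for d in cnn_dets]   # each CNN detection wraps its rectangle in .rect

print(len(hog_dets), len(rects))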

Guess which actress is on my phone~~~


You need to install a few libraries beforehand with pip:

pip install cmake opencv-python dlib

cmake is needed to compile the dlib library during installation~

Enjoy!
