Using Python to Build an Open Platform for Face Recognition


It has been more than 60 years since the emergence of artificial intelligence, and the explosion of deep learning in recent years has pushed the field to a new level of prosperity. In 2016, Google's AlphaGo swept the world of Go and brought artificial intelligence a wave of attention from the general public. Today, artificial intelligence remains one of the hottest research fields, and many giant companies and start-ups have entered it in search of new breakthroughs. For artificial intelligence to change society, it cannot stay at the theoretical and conceptual level; what matters more is commercialization and real-world deployment.

At present, many technology companies provide AI-based open platforms that deliver AI services over the Web, such as face recognition, OCR, and speech recognition. Taking face detection as an example: the user uploads a photo through a specific API, the web server performs face detection on the photo, and the detection result is returned to the user. Real AI algorithms often require complex configuration and high server performance for inference. Deploying over the Web lets developers configure and maintain the server environment without worrying about the configuration or performance of each user's PC. In addition, updating the AI algorithm only requires modifying the server, so a Web architecture is well suited to the rapid deployment and updating of AI products in a production environment.

To keep things approachable, this article uses only the ready-made face detection algorithm provided by OpenCV. In practice, more advanced face detection algorithms can be chosen according to the CPU or GPU capacity of the server to improve detection accuracy; for example, the deep-learning-based MTCNN face detector could be used. This article aims to show readers a direction for web development combined with artificial intelligence. Readers interested in moving into AI can start here and learn basic inference and deployment skills, while more complex face recognition or AI algorithm development will require other books or papers on the subject.

01. Building the face recognition backend

This article uses the face detection algorithm provided by OpenCV to build the face detection backend. OpenCV is an open-source computer vision library initiated by a Russian team at Intel's research and development center. It is free, open source, cross-platform, and designed for real-time, efficient computer vision tasks. Since its first release it has developed rapidly and has won the support and contributions of many companies and scholars in the industry. Because it is BSD-licensed, it can be used freely in both research and commercial applications. Most modules in OpenCV are implemented in C++ and a few in C; the algorithms are highly optimized and efficient, which makes the library very suitable for production environments. The SDK provided by OpenCV currently supports application development in languages such as C++, Java, and Python.

First, download and install the OpenCV development package for Python:

pip install opencv-python

Installing the above package online from within China can be slow, and the installation often fails. Readers can install offline instead: download the installation package, place it in a local path, and run a command of the following form (opencv_python-xxx.whl is a placeholder for the actual filename of the downloaded package):

pip install opencv_python-xxx.whl

Next, develop the backend view function. Since the interfaces are API-based, most of the functional code is written in the views.py file. When the backend receives a user request, the designated view function reads the picture uploaded by the user, calls the OpenCV face detection algorithm on it, and returns the detection result to the user as a JSON string.

Open the views.py file under the serviceApp application and import some Python libraries in the header:

import numpy as np          # matrix operations
import urllib               # URL parsing
import json                 # JSON handling
import cv2                  # OpenCV
import os                   # operating system commands
from django.views.decorators.csrf import csrf_exempt   # CSRF exemption
from django.http import JsonResponse                    # JSON responses

To perform face detection, a specific face detector is needed; in general such a detector must be trained with machine learning algorithms. The OpenCV library used in this article ships with an efficient pre-trained face detector that can be used directly, with no further training. Find the haarcascade_frontalface_default.xml file in the OpenCV installation directory (path: Python installation directory+\Lib\site-packages\cv2\data). This XML file is essentially a configuration file: it stores the trained parameters of the face feature detector, and importing it is all that is needed to perform face detection. For convenience, the XML file is copied into the project's serviceApp application directory.

Continue to edit the views.py file and add the facedetect function as follows:

face_detector_path = "serviceApp\\haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(face_detector_path)   # create the face detector

@csrf_exempt   # exempt this API view from CSRF checks
def facedetect(request):
    result = {}

    if request.method == "POST":   # the client must upload images via POST
        if request.FILES.get("image", None) is not None:   # read the uploaded image
            img = read_image(stream=request.FILES["image"])
        else:
            result.update({
                "#faceNum": -1,
            })
            return JsonResponse(result)

        if len(img.shape) == 3 and img.shape[2] == 3:
            img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # convert color image to grayscale

        # perform face detection
        values = face_detector.detectMultiScale(img,
                                                scaleFactor=1.1,
                                                minNeighbors=5,
                                                minSize=(30, 30),
                                                flags=cv2.CASCADE_SCALE_IMAGE)

        # convert each detected box to top-left and bottom-right corner coordinates
        values = [(int(a), int(b), int(a + c), int(b + d))
                  for (a, b, c, d) in values]
        result.update({
            "#faceNum": len(values),
            "faces": values,
        })
    return JsonResponse(result)

●Face detection configuration file import: the path of the configuration file is stored in the variable face_detector_path; cv2.CascadeClassifier is then used to create the face detector face_detector by passing in that configuration file path;

●Face detection view function: the @csrf_exempt decorator must be applied to the function, otherwise the API interface cannot be used. The result variable stores the final returned result. The API requires the client to submit data via POST, and the view function obtains the image data from the FILES attribute of the request;

●Image reading: the open platform expects the image to be encapsulated in request.FILES under the key "image"; a custom read_image function then reads the image (its definition and detailed code are given later), and the decoded image data is stored in the variable img;

●Image conversion: OpenCV's cascade detector works on grayscale images, so the cv2.cvtColor function provided by OpenCV is used to convert the color image to grayscale;

●Face detection: the face detector's detectMultiScale method is called on the image, and the returned values store the coordinates of each detected face box;

●JSON encapsulation and return: detectMultiScale returns each face as the top-left corner coordinates plus the width and height of the box; to make later drawing easier, each result is converted into top-left and bottom-right corner coordinates, and the final detection result is encapsulated as a JSON string and returned with JsonResponse.
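The coordinate conversion and the shape of the returned JSON can be sketched in isolation (the box values below are made up for illustration):

```python
# detectMultiScale returns boxes as (x, y, w, h);
# the view converts them to (x1, y1, x2, y2) corner pairs
detections = [(120, 80, 60, 60)]   # hypothetical (x, y, w, h) box

faces = [(int(x), int(y), int(x + w), int(y + h))
         for (x, y, w, h) in detections]

result = {"#faceNum": len(faces), "faces": faces}
print(result)   # {'#faceNum': 1, 'faces': [(120, 80, 180, 140)]}
```

Corner pairs let the client draw rectangles directly, without repeating the width/height arithmetic on its side.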

The custom image reading function read_image is given below:

def read_image(stream=None):
    if stream is None:
        return None
    data_temp = stream.read()                    # read the raw bytes from the stream
    img = np.asarray(bytearray(data_temp), dtype="uint8")
    img = cv2.imdecode(img, cv2.IMREAD_COLOR)    # decode the bytes into an OpenCV image
    return img

The read_image function reads an image from a data stream. The uploaded image is assumed to be a color image: it is read in binary mode and then decoded into OpenCV image data by the cv2.imdecode function.

So far, the face detection view processing function has been completed in the backend. In order to use this processing function to perform face detection, the corresponding mapping route needs to be defined for this function. Open the urls.py file under the serviceApp application and add a route in the urlpatterns field:

urlpatterns = [
    # ... existing routes ...
    path('facedetect/', views.facedetect),   # route for the face detection API
]
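For completeness, the app's URLconf must itself be reachable from the project-level urls.py. A minimal sketch, assuming the project and app names used in this article and that the include is not already present:

```python
# hengDaProject/urls.py (sketch)
from django.urls import include, path

urlpatterns = [
    path('serviceApp/', include('serviceApp.urls')),
]
```

With this include in place, the full API address becomes http://localhost:8000/serviceApp/facedetect/, matching the URL used by the test script.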

Run the project after saving all modifications. The face recognition backend service is now up; the next section explains how to call the API from a local Python script.

02. Local script test

This part implements the Web-API-based face detection function by calling the interface from a local Python script. To send HTTP requests from Python locally, the requests library needs to be downloaded and installed:

pip install requests

For the convenience of project integration, place the local call script in the test folder under the project root directory, name it faceDetectDemo.py, and then place a test image face.jpg in the same directory as the file for face detection. Edit the faceDetectDemo.py file and add the following code:

import cv2
import requests

url = "http://localhost:8000/serviceApp/facedetect/"

# upload the image and run detection
tracker = None
imgPath = "face.jpg"   # image path
files = {
    "image": ("filename2", open(imgPath, "rb"), "image/jpeg"),
}

req = requests.post(url, data=tracker, files=files).json()
print("Get information: {}".format(req))

# draw the detection boxes on the image
img = cv2.imread(imgPath)
for (x1, y1, x2, y2) in req["faces"]:
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imshow("face detection", img)
cv2.waitKey(0)

●Face detection API address: during development the backend server runs at http://localhost:8000; following the URL rules, the application name serviceApp and the interface name facedetect are appended to form the final API address;

●Sending data: the open function reads the image content from the local path and the content is encapsulated in the files variable; requests.post then sends the request. The response is parsed as a JSON string, and the information is printed for inspection with the print function;

●Result display: to check the detection effect visually, each returned face detection box is drawn on the original image using OpenCV's rectangle function.

The final detection effect is shown in Figure 1.

▍Figure 1 Face detection result

One thing deserves special attention when developing the face recognition open platform: every response requires the server to run a face detection operation, and face detection is more time- and resource-consuming than conventional operations. When the number of concurrent visits grows too large, the website can crash, which is commonly referred to as traffic overload. Traffic overload has two aspects: one is overloaded access, where the backend host has limited capacity and cannot withstand the traffic; the other is a performance problem in the website code that slows the system down until the service fails. There are usually two remedies for these problems:

1) To keep the website available and let users access it normally, the most effective means is to restrict access, for example by limiting access frequency; this adjustment should be dynamic. Doing so guarantees service availability at the cost of access for some users. Normally, how much traffic the server can support should be established by technical staff through testing before the business goes live, so that supporting data is available in advance;

2) If time permits and the backend service can be scaled out, analyze the access data gathered during the crash, expand the service according to the analysis, and then gradually lift the access restrictions.
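The access-frequency limit from point 1) can be sketched as a minimal in-memory, per-client counter. This is a toy illustration, not production code; the function name and the 10-requests-per-minute limit are made up for the example:

```python
import time
from collections import defaultdict, deque

WINDOW = 60.0   # sliding window length in seconds
LIMIT = 10      # max requests per client within the window

_history = defaultdict(deque)   # client id -> timestamps of recent requests

def allow_request(client_id, now=None):
    """Return True if the client is still under its rate limit."""
    now = time.time() if now is None else now
    q = _history[client_id]
    # drop timestamps that have fallen out of the window
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) >= LIMIT:
        return False
    q.append(now)
    return True

# simulated burst: the 11th request within one window is rejected
results = [allow_request("1.2.3.4", now=100.0 + i) for i in range(11)]
print(results)   # ten True values followed by one False
```

A real deployment would enforce this in shared storage (e.g. a cache) rather than process memory, since a Django site usually runs multiple worker processes.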

The focus of the face recognition open platform developed in this article is to walk you through the basic steps of building an artificial intelligence web interface. In real projects, the web architecture must be designed with server performance, user base, algorithm efficiency, and other factors all taken into account, and a stress test is needed before launch; interested readers can study the relevant topics on their own.

03. Front-end description page

This part improves the front-end pages of the face recognition open platform, mainly by providing an instructions page that shows users how to use the open platform interface. First, create a new file platform.html in the templates folder of the serviceApp application. The head and tail of the file are basically the same as in docList.html; only the page name needs to be changed.

The main part of the page is explanatory text written with conventional HTML tags. It explains the basic interface information of the face recognition open platform and gives a Python-based example of calling the interface. Note that the demo code section provides syntax highlighting suited to Python, which lets users browse and copy the code conveniently. This is implemented by integrating the CodeMirror plug-in. CodeMirror is a very powerful code-editing plug-in with a rich API; its core is JavaScript, and it can highlight code online in real time. It is worth pointing out that this plug-in is not an accessory of some rich text editor, but is essentially a base library for building an online code editor.

In this section, you need to master the method of highlighting code in a page. First download the CodeMirror plug-in package from https://codemirror.net/. The package contains various usage examples; since this article focuses on interface calls based on the Python language, only the Python part is introduced. Open the index.html file in a browser, click the example corresponding to the Python language to jump to the Python example page, and study the page source to see how it is built. In actual use, only the necessary js and css files need to be imported.

Next, enter the specific development stage. First you need to import the necessary js and css files from the CodeMirror plugin package:

<link rel="stylesheet" href="{% static 'css/codemirror.css' %}">
<script src="{% static 'js/codemirror.js' %}"></script>
<script src="{% static 'js/python.js' %}"></script>
<style type="text/css">
    .CodeMirror {
        border-top: 1px solid black;
        border-bottom: 1px solid black;
    }
</style>

The codemirror.css, codemirror.js and python.js files can be found in the CodeMirror plug-in package. Copy them to the css and js subfolders under the static folder of the hengDaProject project to implement the call.

When used, the display and editing of the code are realized through an HTML textarea tag:

<div><textarea id="code" >
    Write Python code here
</textarea></div>

Finally, CodeMirror is initialized on that textarea with a short script:

<script>
    var editor = CodeMirror.fromTextArea(document.getElementById("code"), {
        mode: {
            name: "python",
            version: 3,
            singleLineStringErrors: false
        },
        lineNumbers: true,
        indentUnit: 4,
        tabMode: "shift",
        matchBrackets: true
    });
</script>