How to Build a Facial Recognition App (A Step-by-Step Guide & Tutorial) [using Python & Flask]

Have you ever noticed how modern phones recognize their owners' faces? Or wondered how phones and cameras can detect people's faces in real time?

This is possible due to computer vision (CV), a large subfield of machine learning. In this tutorial, we will show how to use computer vision inside your web application.

The Facial Recognition API

There are many facial recognition APIs out there. For this project, we’ll just be using one.

The Face Recognition and Face Detection API (by Lambda Labs) is a convenient tool that enables you to integrate computer vision into your web and mobile applications. Some apps can use faces as a main or additional step in the authentication process. Some entertainment apps may require the positions of eyes, mouth, and nose on the images to perform funny distortions. There are also applications for photo editing where these aspects are important.

Using the Face Recognition and Face Detection API is an easier approach than training computer vision models on your own from scratch.


How to get an API Key & Use the Facial Recognition API

RapidAPI is the world’s largest API marketplace, with over 10,000 APIs available.

In addition, APIs are categorized by different topics and use cases.

So, how do you work with the Face Recognition and Face Detection API on RapidAPI?

  1. To start, sign up for a free developer account on RapidAPI.
  2. Next, navigate to the Face Recognition and Face Detection API.
  3. Finally, subscribe to the API. Open the Pricing tab on the page with the API. Then select your desired plan (hint: there’s a Basic Plan that allows for 1000 free requests/month).

After choosing your plan, check the details one more time and click the Subscribe button.

Now you should be able to test and use the Face Recognition and Face Detection API.


About the API

First, let’s take a look at the API endpoints:

There are two groups of endpoints (Album Management and Detect & Recognize) and a standalone GET View Entry endpoint.

The GET View Entry endpoint checks how many training images with a particular label are in a given album. The album name, album key, and entry ID are the required input parameters. So, a request can have the following format:
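For instance, a request from Python might look like the sketch below. The /album_entry path and the album key are assumptions for illustration – check the GET View Entry page in the RapidAPI console for the exact path:

```python
import requests

# NOTE: the endpoint path below is an assumption for illustration;
# verify it on the GET View Entry page in the RapidAPI console.
url = "https://lambda-face-recognition.p.rapidapi.com/album_entry"
querystring = {"album": "ADMINS", "albumkey": "album_key", "entryid": "Depp"}
headers = {
    "x-rapidapi-host": "lambda-face-recognition.p.rapidapi.com",
    "x-rapidapi-key": "your_rapid_api_key",
}

# Uncomment to perform the actual request:
# response = requests.get(url, headers=headers, params=querystring)
# print(response.json())
```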

The response is a JSON object with the name of the album, the entry ID (the label you searched for) and the count of images with this entry ID in this album:
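For the entry used later in this tutorial, the response might look like this (the values are illustrative; the field names mirror those in the training responses shown later):

```json
{
    "album": "ADMINS",
    "entryid": "Depp",
    "image_count": 3
}
```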

The Album Management group of endpoints has four endpoints.

They are used for:

  • Album Creation
  • Viewing
  • Training
  • Rebuilding

Create Album and Train Album are POST endpoints, while View Album and Rebuild Album are GET endpoints.

This group of endpoints is important for anyone who wants to use the recognition feature of the API. To use it, you create an album, fill it with labeled images, and then recognize faces in new images (to verify whether they contain faces that are in the album).

The Detect & Recognize group of endpoints has two endpoints.

The POST Recognize endpoint is used to recognize people from a given album in an image.

In order to perform recognition, you have to specify the name of the album and its key. The new image(s) can be passed as files or as URLs.

In the response, the endpoint returns the IDs of the recognized faces, along with each face's location in the image and coordinates for the eyes, nose, and mouth.

All these can be found in the tags field:
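The response looks roughly like the sketch below. The uids/confidence path is the one the application code later in this tutorial relies on; the landmark field names (eye_left, mouth_center, and so on) and all values are illustrative assumptions:

```json
{
    "status": "success",
    "photos": [
        {
            "url": "https://example.com/photo.jpg",
            "tags": [
                {
                    "center": {"x": 160, "y": 120},
                    "eye_left": {"x": 145, "y": 110},
                    "eye_right": {"x": 175, "y": 110},
                    "nose": {"x": 160, "y": 126},
                    "mouth_center": {"x": 160, "y": 142},
                    "uids": [
                        {"uid": "Depp@ADMINS", "confidence": 0.78}
                    ]
                }
            ]
        }
    ]
}
```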

The last endpoint we want to describe is POST Detect.

Its goal is to detect the positions of faces in images. You need to specify the image file(s) or URL(s). The endpoint returns the position of each detected face, along with coordinates for its eyes, mouth, and nose.
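As a quick sketch, a Detect call from Python could look like this. The /detect path and the urls parameter name are assumptions – verify them on the endpoint page:

```python
import requests

# NOTE: the path and parameter name below are assumptions for illustration;
# check the POST Detect page in the RapidAPI console for the real ones.
url = "https://lambda-face-recognition.p.rapidapi.com/detect"
headers = {
    "x-rapidapi-host": "lambda-face-recognition.p.rapidapi.com",
    "x-rapidapi-key": "your_rapid_api_key",
}
# Images can be passed by URL, or uploaded as multipart files instead.
payload = {"urls": "https://example.com/group_photo.jpg"}

# Uncomment to perform the actual request:
# response = requests.post(url, headers=headers, data=payload)
# print(response.json())
```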

Now, after you’ve become familiar with the API, let’s build a simple Flask application.


How to Implement Face Recognition into an App

We’ll be building an application that will be able to recognize admins via photos from a web camera. If there is a high level of confidence that the person who wants to access the admin panel is one of the admins, then access will be allowed.

We will create an album with labeled faces of the app's administrators and train the system on these images.

  1. When somebody tries to access the admin panel, the app will ask them to take a photo using the web camera.
  2. The photo will be analyzed by the Face Recognition and Face Detection API.
  3. If the API thinks that the face from the taken image belongs to one of the admins, the visitor will be redirected to the admin panel.

Note: This application is only a demonstration of the API's capabilities. It cannot be used as an authentication layer because it doesn't meet mandatory security requirements.

For the purposes of the tutorial, there is no need to have a lot of admins and photos. So, we will use three photos for each of the four celebrities: Johnny Depp, Cristiano Ronaldo, Megan Fox, and Keira Knightley.


The first thing we need to do is to create a new album. Let’s call it ADMINS.

To make requests to RapidAPI you should use your unique RapidAPI API key. You can find it in the Header Parameters section on the page of any API.

import requests
import json

url = "https://lambda-face-recognition.p.rapidapi.com/album"
headers = {
    'x-rapidapi-host': "lambda-face-recognition.p.rapidapi.com",
    'x-rapidapi-key': "your_rapid_api_key",
    'content-type': "application/x-www-form-urlencoded"
    }
payload = "album=ADMINS"

response = requests.post(url, headers=headers, data=payload)
resp = json.loads(response.text)

As a result, you get the album key, which you should save and use later in requests to this album:

{
   'album': 'ADMINS',
   'albumkey': 'album_key', 
   'msg': "Please put this in a safe place and remember it, you'll need it!"
}

Now, we can upload images to the album. Let’s do it using cURL:

curl --request POST \
--url https://lambda-face-recognition.p.rapidapi.com/album_train \
--header 'content-type: multipart/form-data' \
--header 'x-rapidapi-host: lambda-face-recognition.p.rapidapi.com' \
--header 'x-rapidapi-key: your_rapid_api_key' \
--form album=ADMINS \
--form albumkey=album_key \
--form entryid=Depp \
--form files=@depp1.jpg

As you can see, we specified the URL of the endpoint, the headers, the name of the album and its key, the “entryid” parameter, and the image file.

In other words, we have labeled this image with the “Depp” tag.

The response is a JSON object showing how many images with this tag are currently in the album.

After we have uploaded the second and third images for Depp, we have the following response:

{
    "album": "ADMINS", 
    "rebuild": false, 
    "entryid": "Depp", 
    "image_count": 3
}

Similarly, we uploaded all the images that we had for all categories (Depp, Ronaldo, Fox, and Knightley).

Let’s view the album using the request from Python:

url = "https://lambda-face-recognition.p.rapidapi.com/album"
querystring = {"album":"ADMINS","albumkey":"album_key"}

headers = {
    'x-rapidapi-host': "lambda-face-recognition.p.rapidapi.com",
    'x-rapidapi-key': "your_rapid_api_key"
    }

response = requests.get(url, headers=headers, params=querystring)
print(response.text)

The output:
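The response is a JSON object of roughly this shape (the field names here are an assumption based on the rest of the API; the per-entry counts correspond to the three photos uploaded for each admin):

```json
{
    "album": "ADMINS",
    "entries": [
        {"entryid": "Depp", "image_count": 3},
        {"entryid": "Ronaldo", "image_count": 3},
        {"entryid": "Fox", "image_count": 3},
        {"entryid": "Knightley", "image_count": 3}
    ]
}
```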

We’ve confirmed that all four entries are in the album. Now we need to rebuild the album:

url = "https://lambda-face-recognition.p.rapidapi.com/album_rebuild"
querystring = {"album":"ADMINS","albumkey":"album_key"}
headers = {
    'x-rapidapi-host': "lambda-face-recognition.p.rapidapi.com",
    'x-rapidapi-key': "your_rapid_api_key"
    }

response = requests.get(url, headers=headers, params=querystring)
print(response.text)

The output confirms that the album was rebuilt.

Now we are ready to recognize faces in new photos. So, let’s create an app.


Building a Facial Recognition website using Flask

The website will consist of two pages:

  1. the login page
  2. and the admin panel.

Only authorized users can access the admin panel.

On the top level, our Flask application has the main.py and config.py files and the app folder.

Here is the main.py file content:

from app import app

The main.py file is the entry point which we execute when we want to run the application.

Here is the config.py file content:

import os

class Config(object):
    SECRET_KEY = os.environ.get('SECRET_KEY') or 'secret_key_facerecognition'

This file holds the app's configuration parameters. As our app is quite simple, we set just a single parameter – the secret key. The SECRET_KEY is needed to keep the client-side sessions secure in Flask.

The app folder contains the api_logic.py, __init__.py, and routes.py files, plus the static and templates folders.

The __init__.py file defines the Flask application and inserts configurations to it.

Here is the content of this file:

from flask import Flask
from config import Config

app = Flask(__name__)
app.config.from_object(Config)

from app import routes

The static folder includes three files: webcam.js, webcam.swf, and custom.js. The first two files represent the webcamjs JavaScript library. We need to use this library to be able to take pictures with users’ webcams. So, we have simply copied these files from the webcamjs repository.

The custom.js file is our own JavaScript code for taking the photo and uploading it to the server. We will return to this file and explore it later.

The templates folder includes the HTML templates for our website. There are three files: base.html, login_form.html, and admin_panel.html.

The base.html file is the most general template in which we want to embed other templates. Let’s take a look at it:

<html>
    <head>
        <script src="/static/webcam.js"></script>
        <script src="/static/custom.js"></script>
      
      {% if title %}
      <title>{{ title }} - Admin panel</title>
      {% else %}
      <title>Admin panel</title>
      {% endif %}
    </head>

    <body>
        <div><a href="{{ url_for('index') }}">Login to admin panel</a></div>
        <hr>
        <div id="content"></div>           
    </body>
</html>

The HTML code has two sections: head and body. In the head section, we define the title of the page and include the JavaScript files (custom.js and webcam.js from the static folder). In the body section, there is a navigation bar at the top. Below it is the space where other templates are embedded (the <div> with id="content").

Let’s look at the login_form.html template:

<h3 style="color: red">{{message}}</h3>

<h2>Login form</h2>
<div>
    <p>To login, please enter your username. Also, take your photo using the web camera.</p>
    <p>Press the <b>Snap</b> button to take a photo. You will see it in the box at the bottom of the page.</p>
    <p>You can press <b>Snap</b> several times until you get a suitable photo.</p>
    <p>When you are ready, press the <b>Submit</b> button. Our system will check whether you can access the admin panel.</p>
</div>

<form id="loginForm" action="" method="post" novalidate>
    {{ form.hidden_tag() }}
    <p>
        {{ form.username.label }}<br>
        {{ form.username(size=32) }}
    </p>
    <input type="button" value="Submit" onclick="upload()">
</form>

<div id="my_camera"></div>
<input type="button" value="Snap" onclick="snap()">
<div id="results"></div>

It has instructions for users and the form with id=”loginForm”. The form has the username field and the Submit button. After clicking on the button, the upload() function should be executed. Keep in mind that the form has hidden_tag to protect against CSRF attacks. 

Besides the form, we created the box for the live video from a web camera, the button Snap and the box where the result of the snapshot should be displayed. The users will be able to check the photo before deciding whether to retake it or to send it to the server.

Next, let’s explore the custom.js file:

function ShowCam() {
    Webcam.set({
        width: 320,
        height: 240,
        image_format: 'jpeg',
        jpeg_quality: 100
    });
    Webcam.attach('#my_camera');
}

// fetch the login_form.html template and embed it into content div
function loadForm() {
    var xhr= new XMLHttpRequest();
    xhr.open('GET', '/login_form/');
    xhr.onreadystatechange= function() {
        if (this.readyState!==4) return;
        if (this.status!==200) return;
        document.getElementById('content').innerHTML = this.responseText;
    };
    xhr.send(); 
}

// bind loadForm() and ShowCam() functions to the corresponding events
window.addEventListener("DOMContentLoaded", loadForm);
window.addEventListener("load", ShowCam);

function snap() {
    Webcam.snap( function(data_uri) {
        // display results in page
        document.getElementById('results').innerHTML = 
        '<img id="image" src="'+ data_uri+'"/>';
      } );      
}

function upload() {
    var photo = document.getElementById('image').src;
    var form = document.getElementById('loginForm');
    photo = dataURItoBlob(photo);
    var formData = new FormData(form);
    formData.append("file", photo);
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onreadystatechange = function() {
        // wait until the request has completed before touching the page
        if (this.readyState !== 4) return;
        if (this.status === 200) {
            document.getElementById('content').innerHTML = this.responseText;
            ShowCam();
        } else {
            document.getElementById('content').innerHTML = "ERROR";
        }
    };
    xmlhttp.open("POST", "/", true);
    xmlhttp.send(formData);
}

function dataURItoBlob(dataURI) {
    // convert base64/URLEncoded data component to raw binary data held in a string
    var byteString;
    if (dataURI.split(',')[0].indexOf('base64') >= 0)
        byteString = atob(dataURI.split(',')[1]);
    else
        byteString = unescape(dataURI.split(',')[1]);

    // separate out the mime component
    var mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0];

    // write the bytes of the string to a typed array
    var ia = new Uint8Array(byteString.length);
    for (var i = 0; i < byteString.length; i++) {
        ia[i] = byteString.charCodeAt(i);
    }
    return new Blob([ia], {type:mimeString});
}

The ShowCam() function sets up the camera with the defined parameters and binds it to the element with id=”my_camera”. The snap() function takes a snapshot and inserts it into the <div> with id=”results”. The upload() function uploads the photo from that <div> to the server, together with the user’s input from the login form. It uses the dataURItoBlob() function to convert the image from the data URI format to a Blob.

The admin_panel.html is a simple template that we want to render after a successful login:

<h2>Welcome to the admin panel, {{user}}</h2>
<div>
    <p>Here you can manage the website</p>
</div>

Now let’s explore one of the most interesting parts of the application – the files routes.py and api_logic.py.

Let’s look at the routes.py file first:

from app import app
from flask import request, render_template
from flask_wtf import FlaskForm
from wtforms import StringField
from app.api_logic import execute_request, check_confidence
import json

from PIL import Image

class LoginForm(FlaskForm):
    username = StringField('Username')

@app.route('/', methods=['GET', 'POST'])
def index():
    login_form = LoginForm()
    if request.method == 'POST':
        username = request.form['username']
        file = request.files['file']
        image = Image.open(file)
        image.save('photo.jpg')

        json_obj = execute_request()
        flag = check_confidence(json_obj)

        if flag:
            return render_template('admin_panel.html', user=username)
        else:
            message = "Your photo was not recognized as admin's photo. Please, try again."
            login_form = LoginForm()
            return render_template('login_form.html', title='Login', form=login_form, message=message)

    return render_template('base.html', title='Login', form=login_form, message="")

@app.route('/login_form/', methods=['GET'])
def login_form():
    login_form = LoginForm()
    return render_template('login_form.html', title='Login', form=login_form, message="")

The index() function renders the base.html template if there is a GET request. If it receives the POST request, it should extract the username and the photo from the form.

Then it saves the photo with the name photo.jpg.

The next step is to use the execute_request() and check_confidence() functions from api_logic.py to make a request to the API and analyze the response.

If the check_confidence() returns True, we render the admin_panel.html template. Otherwise, we render the login_form.html template with the message for the user to retry to log in.

The functions for interacting with the Face Recognition and Face Detection API via the RapidAPI platform are located in the api_logic.py file:

import json, requests, subprocess, shlex

def execute_request():
    bashCommand = '''curl --request POST --url https://lambda-face-recognition.p.rapidapi.com/recognize 
    --header 'content-type: multipart/form-data' 
    --header 'x-rapidapi-host: lambda-face-recognition.p.rapidapi.com' 
    --header 'x-rapidapi-key: your_rapid_api_key' 
    --form albumkey=album_key
    --form album=ADMINS --form files=@photo.jpg'''

    args = shlex.split(bashCommand)
    process = subprocess.Popen(args, stdout=subprocess.PIPE)
    output, error = process.communicate()
    json_obj = json.loads(output)
    return json_obj

def check_confidence(json_obj):
    flag = False
    if json_obj['status'] == 'success':
        if len(json_obj['photos'][0]['tags']) == 0:
            return False
        confidence = json_obj['photos'][0]['tags'][0]['uids'][0]['confidence']
        if confidence > 0.5:
            flag = True
    # explicitly return False when the API reports a failure
    return flag

The execute_request() function sends the request with the saved photo to the API. This function returns the JSON object with the response from the API.

The check_confidence() function analyzes the response. If the probability that one of the admins is depicted in the photo is high enough (more than 50%), we return True in the flag variable. Obviously, this is a simplified approach, but for the tutorial, we can go this way.
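Shelling out to curl works, but the same multipart request can be made natively with the requests library. Here is a sketch (the album key and API key are placeholders, as in the rest of the tutorial):

```python
import requests

API_URL = "https://lambda-face-recognition.p.rapidapi.com/recognize"
HEADERS = {
    "x-rapidapi-host": "lambda-face-recognition.p.rapidapi.com",
    "x-rapidapi-key": "your_rapid_api_key",  # placeholder
}

def recognize_form(album, album_key):
    """Build the non-file form fields for the Recognize call."""
    return {"album": album, "albumkey": album_key}

def execute_request(photo_path="photo.jpg"):
    # requests sets the multipart/form-data content type automatically
    # when the files argument is used, so no content-type header is needed.
    with open(photo_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers=HEADERS,
            data=recognize_form("ADMINS", "album_key"),  # placeholder key
            files={"files": f},
        )
    return response.json()
```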


Testing the Facial Recognition Website

To run the application, we first need to export the FLASK_APP environment variable:

export FLASK_APP=main.py

Then we can run the Flask application with the following command:

flask run

Once Flask reports in the terminal that it is running (by default at http://127.0.0.1:5000/), you can open that address to test the application.

Here is the home page of the web application opened in a browser.

You can see the live stream from the web camera in the box under the Submit button. To take a photo, press the Snap button, and don’t forget to fill in the Username field. The snapshot will be displayed below.

If you like the photo, you can press the Submit button to send it to the server.

Obviously, I’m not Johnny Depp (or any other person from those that are in the ADMINS album).

That’s why the API does not recognize me as an admin, and the application refuses to let me into the admin panel.

So, this part of the application works as expected.

We don’t have any of the celebrities here to test another scenario.

That’s why we need to cheat a little bit. Let’s take the photo of Johnny Depp that was used during training and place it in the application folder with the name photo.jpg.

Also, let’s replace this statement in the routes.py file:

image.save('photo.jpg')

with the following statement:

image.save('photo1.jpg')

In other words, our application will save the snapshot with the name photo1.jpg, but the execute_request() function from the api_logic.py will send the photo of Johnny Depp to the API.

Now we need to restart the application. After clicking the Submit button, we should enter the admin panel.


Conclusion

In this tutorial, we introduced you to the Face Recognition and Face Detection API.

To show how it can be used, we have created a simple web application using Flask.

We hope you now understand the basics of working with this API. The number of use cases for face recognition and face detection is vast.

With this API, the process of integration of these useful features into your apps will become a lot easier.

RapidAPI Staff

The RapidAPI staff consists of various writers in the RapidAPI organization. Check out our medium team page here. For support, please email us at support@rapidapi.com.
