You may often face the task of face verification: for example, verifying that the face in a person's photo matches the one in their passport, or comparing the photos in a passport and a driver's license.
In this tutorial you'll learn how to easily solve a one-to-one face verification task with the Face Analysis API.
We use Python in this tutorial, but you can use any programming language to interact with the Face Analysis API.
The Face Analysis API allows you to perform face detection, face landmark detection (eyes, nose, lips) and to calculate an embedding vector for face tracking and identification. It is this embedding vector that lets you compare two images.
You can use Face Analysis directly at several URLs:
https://demo.api4ai.cloud
https://api4ai.cloud
Or use Face Analysis with RapidAPI.
The demo version has a limit on the number of requests.
Learn more at https://api4.ai/docs/face-analysis.
You can find code examples at https://gitlab.com/api4ai/examples/face-analyzer.
You can send a simple request for face detection and embedding calculation as described in the API documentation. To get the embedding vector, just add embeddings=True
to the query parameters.
The JSON response stores the face bounding box (box), face landmarks (face-landmarks) and the embedding vector (face-embeddings).
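To make the nesting concrete, here is a hypothetical, abbreviated response (it contains only the fields mentioned above; the exact layout may differ between API versions) together with the path used to dig the vector out of it:

```python
import json

# Hypothetical, abbreviated response sketch -- consult the API docs for the real schema.
sample = json.loads('''
{
  "results": [{
    "status": {"code": "ok", "message": ""},
    "entities": [{
      "objects": [{
        "box": [0.17, 0.22, 0.31, 0.41],
        "entities": [
          {"name": "face-landmarks"},
          {"name": "..."},
          {"name": "face-embeddings", "vector": [0.04, -0.12, 0.93]}
        ]
      }]
    }]
  }]
}
''')

# The embedding vector sits in the third entity of the detected face object.
vector = sample['results'][0]['entities'][0]['objects'][0]['entities'][2]['vector']
print(vector)  # [0.04, -0.12, 0.93]
```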
The next step is to calculate the similarity. The faces are compared via the L2-distance dist between their embeddings, which is converted to a similarity score as similarity = exp(dist⁷ · ln(0.5) / a⁷), where a is a constant: the L2-distance at which the similarity equals 50%. In order to do this, follow these steps.
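As a quick sanity check of this conversion: when the L2-distance equals the constant a, the similarity comes out at exactly 50%:

```python
import math

a = 1.23           # L2-distance at which similarity is defined to be 50%
dist = a           # pretend two embeddings are exactly that far apart
similarity = math.exp(dist ** 7 * math.log(0.5) / a ** 7)
print(similarity)  # -> 0.5 (up to floating-point rounding)
```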
First of all, you need to send a request to the Face Analysis API.
We use the requests library to make HTTP requests.
import pathlib
import requests

with pathlib.Path('/path/to/image.jpg').open('rb') as f:
    res = requests.post('https://demo.api4ai.cloud/face-analyzer/v1/results',
                        params={'embeddings': 'True'},
                        files={'image': f.read()})
Do not forget to specify embeddings=True in the query parameters to get the embedding vector.
As mentioned earlier, the response stores plenty of face detection information in JSON format.
It comes as a string, so you have to parse it into a dict with the json module and extract the embedding vector from it:
import json

res_json = json.loads(res.text)
if res_json['results'][0]['status']['code'] == 'failure':
    raise RuntimeError(res_json['results'][0]['status']['message'])
# The embedding vector is the third entity of the detected face object.
embedding = res_json['results'][0]['entities'][0]['objects'][0]['entities'][2]['vector']
Attention! Sometimes your request may fail even if you get a 200 status code in the response. Always check that results[].status.code in the response JSON is not failure.
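Both failure modes, an HTTP error status and a failure status inside an otherwise successful response, can be handled with one small helper. This is just a sketch; the function name and error messages are our own, not part of the API:

```python
import json

def check_api_response(status_code: int, body: str) -> dict:
    """Parse a Face Analysis API response and raise on any kind of failure."""
    if 400 <= status_code <= 599:
        raise RuntimeError(f'API returned HTTP status {status_code}')
    parsed = json.loads(body)
    status = parsed['results'][0]['status']
    if status['code'] == 'failure':  # a failure may arrive even with HTTP 200
        raise RuntimeError(status['message'])
    return parsed
```

You would call it as, e.g., `check_api_response(res.status_code, res.text)` right after the POST request.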
The next step is to calculate the L2-distance and convert it to a face similarity using the equation above.
import math

dist = math.sqrt(sum((i - j) ** 2 for i, j in zip(embedding1, embedding2)))  # L2-distance
a = 1.23
similarity = math.exp(dist ** 7 * math.log(0.5) / a ** 7)
A face similarity threshold defines the minimal similarity at which we consider the faces to belong to the same person:
threshold = 0.8
if similarity >= threshold:
    print("It's the same person.")
else:
    print('There are different people on the images.')
You can adjust the threshold parameter to suit your case. If it is important for you to reduce false positives (wrongly deciding that it is the same person), increase the threshold. If you only need to flag clearly different people, decrease it.
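A tiny illustration of how the threshold shifts the verdict for a hypothetical borderline similarity of 62%:

```python
def verdict(similarity: float, threshold: float) -> str:
    """Classify a pair of faces given a similarity score in [0, 1]."""
    return "same person" if similarity >= threshold else "different people"

similarity = 0.62                 # hypothetical borderline score
print(verdict(similarity, 0.5))   # lenient threshold -> "same person"
print(verdict(similarity, 0.8))   # strict threshold  -> "different people"
```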
Now you know how to get the face similarity and can create a script that checks whether the same person is in two images.
#!/usr/bin/env python3
"""Determine whether the same person is in two photos."""
from __future__ import annotations

import argparse
import json
import math
from pathlib import Path

import requests
from requests.adapters import HTTPAdapter, Retry

API_URL = 'https://demo.api4ai.cloud'
ALLOWED_EXTENSIONS = ['.jpg', '.jpeg', '.png']


def parse_args():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser()
    parser.add_argument('image1', help='Path or URL to the first image.')
    parser.add_argument('image2', help='Path or URL to the second image.')
    return parser.parse_args()


def get_image_embedding_vector(img_path: str):
    """Get face embedding using Face Analysis API."""
    retry = Retry(total=4, backoff_factor=1,
                  status_forcelist=[429, 500, 502, 503, 504])
    session = requests.Session()
    session.mount('https://', HTTPAdapter(max_retries=retry))
    if '://' in img_path:
        res = session.post(API_URL + '/face-analyzer/v1/results',
                           params={'embeddings': 'True'},  # required to get embeddings
                           data={'url': str(img_path)})
    else:
        img_path = Path(img_path)
        if img_path.suffix not in ALLOWED_EXTENSIONS:
            raise NotImplementedError('Image has an unsupported extension.')
        with img_path.open('rb') as f:
            res = session.post(API_URL + '/face-analyzer/v1/results',
                               params={'embeddings': 'True'},  # required to get embeddings
                               files={'image': f.read()})
    # Check the HTTP status before parsing: error responses may not be valid JSON.
    if 400 <= res.status_code <= 599:
        raise RuntimeError(f'API returned status {res.status_code}'
                           f' with text: {res.text}')
    res_json = json.loads(res.text)
    if res_json['results'][0]['status']['code'] == 'failure':
        raise RuntimeError(res_json['results'][0]['status']['message'])
    return res_json['results'][0]['entities'][0]['objects'][0]['entities'][2]['vector']


def convert_to_percent(dist):
    """Convert embeddings L2-distance to similarity percent."""
    threshold_50 = 1.23
    return math.exp(dist ** 7 * math.log(0.5) / threshold_50 ** 7)


def main():
    """Entrypoint."""
    try:
        # Parse command line arguments.
        args = parse_args()
        # Get embeddings of two images.
        emb1 = get_image_embedding_vector(args.image1)
        emb2 = get_image_embedding_vector(args.image2)
        # Calculate similarity of faces in two images.
        dist = math.sqrt(sum((i - j) ** 2 for i, j in zip(emb1, emb2)))  # L2-distance
        similarity = convert_to_percent(dist)
        # The threshold at which faces are considered the same.
        threshold = 0.8
        print(f'Similarity is {similarity*100:.1f}%.')
        if similarity >= threshold:
            print("It's the same person.")
        else:
            print('There are different people on the images.')
    except Exception as e:
        print(str(e))


if __name__ == '__main__':
    main()
We also added command-line argument parsing, input validation, and automatic retries of failed requests.
Let's try this script with two photos of Jared Leto.
Just run the script with this command in your terminal:
python3 ./main.py 'https://storage.googleapis.com/api4ai-static/rapidapi/face_verification_tutorial/leto1.jpg' 'https://storage.googleapis.com/api4ai-static/rapidapi/face_verification_tutorial/leto2.jpg'
The output we get with version v1.16.2:
Similarity is 99.2%.
It's the same person.
Now let's compare several different actors: Jensen Ackles, Jared Padalecki, Dwayne Johnson, Kevin Hart, Scarlett Johansson and Natalie Portman.
|                    | Ackles | Padalecki | Johnson | Hart  | Johansson | Portman |
|--------------------|--------|-----------|---------|-------|-----------|---------|
| Jensen Ackles      | 86%    | 9.6%      | 22%     | 23.2% | 33.3%     | 13.1%   |
| Jared Padalecki    | 49.3%  | 89.2%     | 56.7%   | 69.6% | 43.4%     | 9.5%    |
| Dwayne Johnson     | 55.8%  | 45.4%     | 88.2%   | 43%   | 21.8%     | 8.3%    |
| Kevin Hart         | 56.5%  | 62.6%     | 33.1%   | 87.7% | 51%       | 27%     |
| Scarlett Johansson | 17.4%  | 8.5%      | 28.1%   | 31.7% | 79.9%     | 23.5%   |
| Natalie Portman    | 5.8%   | 12.8%     | 18.5%   | 9.9%  | 21.2%     | 83.5%   |

(Rows and columns compare different photos of each actor, which is why even the diagonal stays below 100%.)
Let's run the script:
python3 ./main.py 'https://storage.googleapis.com/api4ai-static/rapidapi/face_verification_tutorial/ackles1.jpg' 'https://storage.googleapis.com/api4ai-static/rapidapi/face_verification_tutorial/padalecki1.jpg'
The output:
Similarity is 41.7%.
There are different people on the images.
Let's see how it works with the same person in different poses.
|         | Photo 1 | Photo 2 | Photo 3 |
|---------|---------|---------|---------|
| Photo 1 | 100%    | 62.9%   | 46.8%   |
| Photo 2 | 62.9%   | 100%    | 58.1%   |
| Photo 3 | 46.8%   | 58.1%   | 100%    |
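The 100% diagonal and the mirrored off-diagonal values follow directly from the conversion formula: the L2-distance from an embedding to itself is 0 (so the similarity is exp(0) = 1), and the distance is symmetric in its two arguments. A quick sketch with toy embeddings (real vectors are much longer):

```python
import math

def similarity(emb1, emb2, a=1.23):
    """Convert the L2-distance between two embeddings to a similarity score."""
    dist = math.sqrt(sum((i - j) ** 2 for i, j in zip(emb1, emb2)))
    return math.exp(dist ** 7 * math.log(0.5) / a ** 7)

e1 = [0.1, -0.3, 0.7]  # toy embeddings, for illustration only
e2 = [0.2, -0.1, 0.6]
print(similarity(e1, e1))                        # 1.0 -- identical embeddings
print(similarity(e1, e2) == similarity(e2, e1))  # True -- order does not matter
```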
Now you know how to use the Face Analysis API to solve a one-to-one face verification task. Go to the Face Analysis API Docs and the Face Analysis API code examples, learn more and build your own apps!