The service performs face detection, landmark detection (nose, eyes, and lips) and calculates an embedding vector useful for face tracking and re-identification.
The face bounding box is represented by 4 float numbers: the x and y position of the top-left corner, and the width and height of the box. The numbers are normalized relative to the image size.
Facial landmarks are represented by the x and y coordinates of 5 keypoints: the nose, the left and right eyes, and the left and right corners of the lips. The numbers are normalized relative to the image size.
Note: The algorithm may detect multiple faces and sets of landmarks in a single image.
Embeddings are represented by a vector of 512 float values.
Depending on the query parameters, an embedding vector may be calculated for each face detected in an image or for the whole image
(useful when an image consists of a single cropped face).
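Because all coordinates are normalized, a client typically rescales them to pixels before drawing or cropping. A minimal sketch (the helper name is ours, not part of the API):

```python
# Convert a normalized [x, y, width, height] bounding box (top-left corner,
# values relative to image size, as described above) into pixel coordinates.

def box_to_pixels(box, image_width, image_height):
    """Scale a normalized [x, y, w, h] box to integer pixel coordinates."""
    x, y, w, h = box
    return (
        round(x * image_width),
        round(y * image_height),
        round(w * image_width),
        round(h * image_height),
    )

# A box covering the central quarter of a 1024x768 image:
print(box_to_pixels([0.25, 0.25, 0.5, 0.5], 1024, 768))
# (256, 192, 512, 384)
```

The same scaling applies to each landmark point, using only its x and y.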
METHOD | URL | DESCRIPTION |
---|---|---|
GET | https://face-detection14.p.rapidapi.com/v1/version | Get service version. |
POST | https://face-detection14.p.rapidapi.com/v1/results | Perform image analysis and get results. |
Returns the current version of the service in the format vX.Y.Z, where X is the API version.
PROPERTY | DESCRIPTION |
---|---|
Endpoint | https://face-detection14.p.rapidapi.com/v1/version |
Method | GET |
Query parameters | (none) |
POST parameters | (none) |
Examples
Request:
$ curl -X 'GET' 'https://face-detection14.p.rapidapi.com/v1/version'
Response:
v1.5.0
Performs the actual image analysis and responds with the results.
PROPERTY | DESCRIPTION |
---|---|
Endpoint | https://face-detection14.p.rapidapi.com/v1/results |
Method | POST |
Query parameters | detection, embeddings |
POST parameters | image, url |
Query parameter: detection
The `detection` query parameter allows the client to enable or disable face and landmark detection.
If the client passes the value `True`, the service performs detection.
If the client passes the value `False`, the image is treated as a cropped face-only image and the service skips detection.
In this case landmarks are omitted from the response.
Detection is enabled by default.
Query parameter: embeddings
The `embeddings` query parameter allows the client to enable or disable embedding calculation.
If the client passes the value `True`, the service calculates embeddings for each face detected in the image.
If the client passes the value `False`, embeddings are not calculated.
Embedding calculation is disabled by default.
Note: If you want to skip face detection and just calculate embeddings for the whole image, use
the following combination of flags: `detection=False&embeddings=True`.
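As a sketch of assembling these flags on the client side, the following builds the results URL with Python's standard `urllib.parse`; the helper function is illustrative, not part of any official client:

```python
from urllib.parse import urlencode

# Illustrative helper: build the /v1/results URL for a given combination of
# the detection and embeddings query parameters described above.
BASE = "https://face-detection14.p.rapidapi.com/v1/results"

def results_url(detection=True, embeddings=False):
    """Return the results endpoint URL with the given query flags."""
    params = {"detection": str(detection), "embeddings": str(embeddings)}
    return BASE + "?" + urlencode(params)

# Whole-image embedding case from the note above:
print(results_url(detection=False, embeddings=True))
# https://face-detection14.p.rapidapi.com/v1/results?detection=False&embeddings=True
```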
Response schema
For responses with a 200 HTTP code, the response body is a JSON object with the
following schema:
{
"results": [
{
"status": {
"code": ...,
"message": ...
},
"name": ...,
"md5": ...,
"page": ...,
"width": ...,
"height": ...,
"entities": [
{
"kind": "objects",
"name": "face-detector",
"objects": [
{
"entities": [
{
"kind": "vector",
"name": "face-embeddings",
"vector": ...
},
{
"kind": "classes",
"name": "face",
"classes": {
"face": ...
}
},
{
"kind": "namedpoints",
"name": "face-landmarks",
"namedpoints": {
"left-eye": [...],
"right-eye": [...],
"nose-tip": [...],
"mouth-left-corner": [...],
"mouth-right-corner": [...]
}
}
],
"box": ...
},
...
]
}
]
}
]
}
Primary fields:
Name | Type | Description |
---|---|---|
results[].status.code | string | Status code of image processing: ok or failure. |
results[].status.message | string | Human-readable explanation of the image processing status. |
results[].name | string | Original image name passed in the request (e.g. my_image.jpg). |
results[].md5 | string | MD5 sum of the original image passed in the request. |
results[].page | int | Optional page number (present for multipage inputs only). |
results[].width | int | Optional image width (present for valid inputs only). |
results[].height | int | Optional image height (present for valid inputs only). |
results[].entities[].objects | array | Array of detected faces. |
results[].entities[].objects[].box | array | Face bounding box defined by 4 float values. |
results[].entities[].objects[].entities[name=face-embeddings].vector | array | Optional embedding vector represented by an array of 512 float values. |
results[].entities[].objects[].entities[name=face].classes.face | float | Face detection confidence. |
results[].entities[].objects[].entities[name=face-landmarks].namedpoints | object | Optional object that represents facial landmarks. |
Other fields that are not described above always have the same values.
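A sketch of walking this schema in Python; the trimmed response below is a stand-in for a real payload, and `iter_faces` is our own illustrative helper, not part of the service:

```python
# Trimmed stand-in response; field names follow the schema above.
response = {
    "results": [
        {
            "status": {"code": "ok", "message": "Success"},
            "name": "faces.jpg",
            "entities": [
                {
                    "kind": "objects",
                    "name": "face-detector",
                    "objects": [
                        {
                            "box": [0.53, 0.28, 0.19, 0.27],
                            "entities": [
                                {"kind": "classes", "name": "face",
                                 "classes": {"face": 0.999}},
                            ],
                        }
                    ],
                }
            ],
        }
    ]
}

def iter_faces(response):
    """Yield (box, confidence) for every detected face in a response."""
    for result in response["results"]:
        if result["status"]["code"] != "ok":
            continue  # skip failed images
        for entity in result["entities"]:
            if entity["name"] != "face-detector":
                continue
            for obj in entity["objects"]:
                confidence = next(
                    e["classes"]["face"]
                    for e in obj["entities"]
                    if e["name"] == "face"
                )
                yield obj["box"], confidence

for box, conf in iter_faces(response):
    print(box, conf)
```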
Passing image
An image can be passed by posting regular "multipart form data" in one of two ways:

  * image field
  * url field

The image must be a regular JPEG or PNG image (with or without transparency) or a PDF file.
Such images usually have the extensions .jpg, .jpeg, .png, or .pdf. In the case of a PDF,
each page is converted to a PNG image and processed separately.
The service checks the input file by MIME type and accepts the following types:

  * image/jpeg
  * image/png
  * application/pdf
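A client may want to mirror this check before uploading. A small sketch using Python's standard `mimetypes` module (extension-based guessing only, which is weaker than the server's real MIME check):

```python
import mimetypes

# Accepted MIME types listed above.
ACCEPTED_TYPES = {"image/jpeg", "image/png", "application/pdf"}

def is_supported(filename):
    """Guess the MIME type from the file extension and check it is accepted."""
    mime, _ = mimetypes.guess_type(filename)
    return mime in ACCEPTED_TYPES

print(is_supported("faces.jpg"))   # True
print(is_supported("notes.txt"))   # False
```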
The size of the image file must be less than 16 MB.
Examples
Request:
curl -X 'POST' 'https://face-detection14.p.rapidapi.com/v1/results' -F 'image=@faces.jpg'
Response:
{
"results": [
{
"status": {
"code": "ok",
"message": "Success"
},
"name": "faces.jpg",
"md5": "39cefc732e7e0c2b1b5f6a783dc87141",
"width": 1024,
"height": 768,
"entities": [
{
"kind": "objects",
"name": "face-detector",
"objects": [
{
"box": [
0.5379649347853466,
0.2885976606043589,
0.19544756039146483,
0.2706363619501567
],
"entities": [
{
"kind": "classes",
"name": "face",
"classes": {
"face": 0.9990811347961426
}
},
{
"kind": "namedpoints",
"name": "face-landmarks",
"namedpoints": {
"left-eye": [
0.5917292404174804,
0.37885522842407227
],
"right-eye": [
0.6897146463394165,
0.39089276552200314
],
"nose-tip": [
0.6375282120704651,
0.45772279739379884
],
"mouth-left-corner": [
0.5896168756484985,
0.4839122676849365
],
"mouth-right-corner": [
0.6723905473947525,
0.49339886665344235
]
}
}
]
}
]
}
]
}
]
}
When the client sends an image that cannot be processed for some reason, the service responds with a 200
code and returns a JSON object in the same format as for a successful analysis. In this case, results[].status.code
will have the value failure and results[].status.message will contain a relevant explanation.
A corrupted image is one example of such an issue.
Example response for a corrupted image:
{
"results": [
{
"status": {
"code": "failure",
"message": "Can not load image."
},
"name": "file.jpg",
"md5": "d41d8cd98f00b204e9800998ecf8427e",
"entities": []
}
]
}
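A sketch of per-result error handling based on the status fields above (`check_result` is our own illustrative helper, not part of the service):

```python
# Distinguish per-image success from failure using the status object
# described above ("ok" vs "failure").

def check_result(result):
    """Return True if a result entry was processed successfully."""
    if result["status"]["code"] == "failure":
        print(f"{result['name']}: {result['status']['message']}")
        return False
    return True

failed = {
    "status": {"code": "failure", "message": "Can not load image."},
    "name": "file.jpg",
    "md5": "d41d8cd98f00b204e9800998ecf8427e",
    "entities": [],
}
print(check_result(failed))  # False
```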
The request size is limited to approximately 32 MB.
When a client sends a request that exceeds this limit, the service responds with a 413 code.
The typical reason for exceeding this limit is an overly large image.
Taking additional HTTP overhead into account, we strongly recommend not passing image files larger than 16 MB.
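A minimal client-side pre-check for the 16 MB image limit described above (the constant and helper are illustrative, not part of the API):

```python
# 16 MB image-size limit recommended above.
MAX_IMAGE_BYTES = 16 * 1024 * 1024

def image_small_enough(data: bytes) -> bool:
    """Return True if the image payload is under the 16 MB limit."""
    return len(data) < MAX_IMAGE_BYTES

print(image_small_enough(b"\xff\xd8" * 100))  # small JPEG-like blob -> True
```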
Example response for too big request:
Error: Request Entity Too Large
Your client issued a request that was too large.
When a client sends a request with neither an image nor a url, the service responds with a 422
code and returns a JSON object.
Example response for request without image or url:
{"detail": "Missing image or url field."}