The API uses a computer vision model to estimate human pose and returns a list of 31 detected points, 3 projected points, and 16 angles formed by different joints of the body.
The API takes a URL or data URI of an image and, optionally, the plane in which the person in the image appears.
Example:
{
"url": "[url-to-image/dataURI from image]",
"plane": "coronal"
}
The possible values of plane are the standard anatomical planes (for example, "coronal", as shown above).
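A minimal sketch of building and sending the request body described above. The endpoint URL and the exact list of accepted plane names are assumptions for illustration, not part of the documented API.

```javascript
// Hypothetical list of accepted planes; only "coronal" appears in the docs.
const PLANES = ["sagittal", "coronal", "transverse"];

// Build the JSON request body: "url" is required, "plane" is optional.
function buildPoseRequest(image, plane) {
  const body = { url: image };
  if (plane !== undefined) {
    if (!PLANES.includes(plane)) {
      throw new Error(`unknown plane: ${plane}`);
    }
    body.plane = plane;
  }
  return JSON.stringify(body);
}

// Example: POSTing the body (the endpoint URL is hypothetical).
// fetch("https://example.com/pose", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: buildPoseRequest("https://example.com/photo.jpg", "coronal"),
// }).then((r) => r.json());
```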
The response has the following structure:
{
  "success": true,
  "angles": [
    {
      "name": "Left Elbow-Shoulder-Hip",
      "angle": 30.2049
    }
    ...
  ],
  "landmarks": [
    {
      "point": "nose",
      "type": "detected",
      "visibility": 0.9996007084846497,
      "x": 0.4148496687412262,
      "y": 0.0948917344212532,
      "z": -0.2626589834690094
    }
    ...
  ]
}
Landmarks:
  detected: points detected by the model
  projected: points projected based on other points
Angles:
  angles calculated by the API
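Two small helpers for post-processing a response with the structure shown above. The 0.5 visibility threshold and the sample data are illustrative choices, not values mandated by the API.

```javascript
// Turn the "angles" array into a name → angle lookup.
function indexAngles(response) {
  const byName = {};
  for (const { name, angle } of response.angles) byName[name] = angle;
  return byName;
}

// Keep only detected points the model is reasonably confident about.
// Projected points are derived from other points, so they are skipped here.
function visibleLandmarks(response, threshold = 0.5) {
  return response.landmarks.filter(
    (l) => l.type === "detected" && l.visibility >= threshold
  );
}

// Sample response shaped like the documented structure.
const sample = {
  success: true,
  angles: [{ name: "Left Elbow-Shoulder-Hip", angle: 30.2049 }],
  landmarks: [
    { point: "nose", type: "detected", visibility: 0.9996, x: 0.41, y: 0.09, z: -0.26 },
  ],
};
```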
The API can be complemented by a JavaScript library that draws the output over an HTML canvas.
Examples of how to consume the API from a web page and how to use the utilities can be found here:
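The x and y values in the example response lie in [0, 1], which suggests they are normalized to the image dimensions (an assumption). Before drawing on a canvas, they would need to be scaled to pixel coordinates; a sketch, independent of the companion library:

```javascript
// Map a normalized landmark (assumed x, y in [0, 1]) to canvas pixels.
function toPixels(landmark, width, height) {
  return { x: landmark.x * width, y: landmark.y * height };
}

// Draw each landmark as a small filled circle on a 2D canvas context.
function drawLandmarks(ctx, landmarks, width, height) {
  for (const lm of landmarks) {
    const { x, y } = toPixels(lm, width, height);
    ctx.beginPath();
    ctx.arc(x, y, 4, 0, 2 * Math.PI);
    ctx.fill();
  }
}
```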