For this tutorial you will need the following Python packages: numpy, opencv-python, and requests. You can install them with pip:
pip install numpy opencv-python requests
or
pip3 install numpy opencv-python requests
First of all we have to import the required packages:
import cv2
import json
import requests
import numpy as np
Next let’s define some constants:
img_address = "https://promity.com/wp-content/uploads/2021/05/image-0111a.jpg"
api_key = "XXX"
api_address = "https://faceanalysis.p.rapidapi.com/emotions/process_url"
Where:
- img_address is the URL of the image we want to analyze,
- api_key is your personal RapidAPI key (here masked as "XXX"),
- api_address is the URL of the emotion-recognition endpoint.
Now we can send a request to the API:
headers = {
    "X-Rapidapi-Key": api_key,
    "Content-Type": "application/json"
}
params = {'img_url': img_address}
response = requests.post(api_address, headers=headers, params=params)
and convert the response from JSON into an easier-to-use Python dictionary:
response_dict = json.loads(response.text)
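To make the parsing step concrete without calling the API, here is a sketch with a made-up response body. Its shape (a 'detections' list, each with a 'crop' and an 'emotions' map) is assumed from the fields this tutorial reads later; the real API may return additional fields:

```python
import json

# A made-up response body, shaped like the fields used later in this tutorial.
sample_text = '''
{
  "detections": [
    {
      "crop": {"score": 0.97, "x1": 0.1, "y1": 0.2, "x2": 0.3, "y2": 0.5},
      "emotions": {"happiness": 0.91, "neutral": 0.07, "sadness": 0.02}
    }
  ]
}
'''

sample_dict = json.loads(sample_text)
for det in sample_dict['detections']:
    # The strongest emotion is the key with the highest score.
    print(det['crop']['score'], max(det['emotions'], key=det['emotions'].get))
```

Note that requests also offers response.json(), which performs the same conversion in one call.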
Now, let's make some visualization. First we have to download our image from the Internet:
resp_img = requests.get(img_address, stream=True)
arr = np.asarray(bytearray(resp_img.content), dtype=np.uint8)
img = cv2.imdecode(arr, -1)
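The middle step above turns the raw downloaded bytes into a flat uint8 buffer, which is the form cv2.imdecode expects. It can be sketched in isolation with a made-up byte string standing in for resp_img.content:

```python
import numpy as np

# Made-up raw bytes, standing in for the downloaded image content.
raw = b"\x00\x01\x02\xff"

# bytearray -> np.asarray yields a flat uint8 array over those bytes.
buf = np.asarray(bytearray(raw), dtype=np.uint8)
print(buf.dtype, buf.shape)
```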
Get the height and width of the image, which will be needed later for visualization:
img_height, img_width, _ = img.shape
Let's define some constants needed for visualization in OpenCV:
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 1
fontColor = (0,255,0)
lineType = 2
Now iterate through all detections, draw a bounding box around each detected face, and label it with the predicted emotion:
for det in response_dict['detections']:
    crop = det['crop']
    if crop['score'] < 0.6:
        continue
    x1 = int(crop['x1'] * img_width)
    x2 = int(crop['x2'] * img_width)
    y1 = int(crop['y1'] * img_height)
    y2 = int(crop['y2'] * img_height)
    img = cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 3)
    emotion = max(det['emotions'], key=det['emotions'].get)
    img = cv2.putText(img, emotion + "_" + "{:1.2f}".format(det['emotions'][emotion]),
                      (x1, y2 - 10),
                      font,
                      fontScale,
                      fontColor,
                      lineType)
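The crop coordinates returned by the API are normalized to the range [0, 1], which is why they are multiplied by the image width and height. A minimal illustration with made-up numbers:

```python
# Made-up normalized crop and image size, to illustrate the coordinate scaling.
box = {'x1': 0.25, 'y1': 0.10, 'x2': 0.75, 'y2': 0.90}
w, h = 640, 480

x1, x2 = int(box['x1'] * w), int(box['x2'] * w)
y1, y2 = int(box['y1'] * h), int(box['y2'] * h)
print((x1, y1), (x2, y2))  # (160, 48) (480, 432)
```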
The value of crop['score'] tells us how confident the face detection algorithm was that there is a face in the given region. Its maximum is 1.0, which means the algorithm was 100% sure; in other words, it is a probability. In this example, if it is less than 0.6, we omit the detection.
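The same filtering, isolated with a few made-up detections (the `continue` in the loop above keeps exactly the detections whose score is at least 0.6):

```python
# Made-up detections with varying confidence scores.
detections = [
    {'crop': {'score': 0.95}},
    {'crop': {'score': 0.40}},
    {'crop': {'score': 0.72}},
]

# Keep only detections the face detector is reasonably sure about.
kept = [d for d in detections if d['crop']['score'] >= 0.6]
print(len(kept))  # 2
```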
OK, let's look at our image:
cv2.imshow('Promity Rapidapi demo', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Or save it to disk:
cv2.imwrite('promity_emotion_endpoint.png', img)
import cv2
import json
import requests
import numpy as np
img_address = "https://promity.com/wp-content/uploads/2021/05/image-0111a.jpg"
api_key = "XXX"
api_address = "https://faceanalysis.p.rapidapi.com/emotions/process_url"
headers = {
    "X-Rapidapi-Key": api_key,
    "Content-Type": "application/json"
}
params = {'img_url': img_address}
response = requests.post(api_address, headers=headers, params=params)
response_dict = json.loads(response.text)
resp_img = requests.get(img_address, stream=True)
arr = np.asarray(bytearray(resp_img.content), dtype=np.uint8)
img = cv2.imdecode(arr, -1)
img_height, img_width, _ = img.shape
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 1
greenColor = (0, 255, 0)
lineType = 2
print(response.text)
for det in response_dict['detections']:
    crop = det['crop']
    if crop['score'] < 0.6:
        continue
    x1 = int(crop['x1'] * img_width)
    x2 = int(crop['x2'] * img_width)
    y1 = int(crop['y1'] * img_height)
    y2 = int(crop['y2'] * img_height)
    img = cv2.rectangle(img, (x1, y1), (x2, y2), greenColor, lineType)
    emotion = max(det['emotions'], key=det['emotions'].get)
    img = cv2.putText(img, emotion,
                      (x1, y2 - 10),
                      font,
                      fontScale,
                      greenColor,
                      lineType)
cv2.imshow('Promity Rapidapi demo', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imwrite('promity_emotion_endpoint.png', img)
First of all we have to import the required packages:
import cv2
import json
import requests
Next let’s define some constants:
img_path = 'example_image.jpg'
api_key = "XXX"
api_address = "https://faceanalysis.p.rapidapi.com/emotions/process_file"
Where:
- img_path is the path to a local image file we want to analyze,
- api_key is your personal RapidAPI key (here masked as "XXX"),
- api_address is the URL of the emotion-recognition endpoint that accepts file uploads.
Now we can send a request to the API:
files = {'image_file': open(img_path, 'rb')}
headers = {
    "x-rapidapi-key": api_key
}
response = requests.post(api_address, files=files, headers=headers)
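When requests is given a files argument, it encodes the body as multipart/form-data and sets the Content-Type header itself, which is why we do not set it manually here. You can inspect this without sending anything by preparing the request; the URL and field name below mirror the tutorial, while the file content is a made-up byte string:

```python
import requests

# Build (but do not send) the multipart request to see what would be transmitted.
req = requests.Request(
    'POST',
    'https://faceanalysis.p.rapidapi.com/emotions/process_file',
    files={'image_file': ('example_image.jpg', b'\xff\xd8fake-jpeg-bytes')},
    headers={'x-rapidapi-key': 'XXX'},
)
prepared = req.prepare()
print(prepared.headers['Content-Type'])  # e.g. multipart/form-data; boundary=...
```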
and convert the JSON response into a dictionary:
json_dict = json.loads(response.text)
Now read the image and get its height and width, which will be used during the visualization process:
img = cv2.imread(img_path)
img_height, img_width, _ = img.shape
Let's define some constants needed for visualization in OpenCV:
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 1
fontColor = (0, 255, 0)
lineType = 2
Now iterate through all detections, draw a bounding box around each detected face, and label it with the predicted emotion:
for det in json_dict['detections']:
    crop = det['crop']
    if crop['score'] < 0.6:
        continue
    x1 = int(crop['x1'] * img_width)
    x2 = int(crop['x2'] * img_width)
    y1 = int(crop['y1'] * img_height)
    y2 = int(crop['y2'] * img_height)
    img = cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 3)
    emotion = max(det['emotions'], key=det['emotions'].get)
    img = cv2.putText(img, emotion + "_" + "{:1.2f}".format(det['emotions'][emotion]),
                      (x1, y2 - 10),
                      font,
                      fontScale,
                      fontColor,
                      lineType)
The value of crop['score'] tells us how confident the face detection algorithm was that there is a face in the given region. Its maximum is 1.0, which means the algorithm was 100% sure; in other words, it is a probability. In this example, if it is less than 0.6, we omit the detection.
OK, let's look at our image:
cv2.imshow('Promity Rapidapi demo', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Or save it to disk:
cv2.imwrite('promity_facial_emotion_endpoint.png', img)
import cv2
import json
import requests
img_path = 'example_image.jpg'
api_key = "XXX"
api_address = "https://faceanalysis.p.rapidapi.com/emotions/process_file"
files = {'image_file': open(img_path, 'rb')}
headers = {
    "x-rapidapi-key": api_key
}
response = requests.post(api_address, files=files, headers=headers)
json_dict = json.loads(response.text)
img = cv2.imread(img_path)
img_height, img_width, _ = img.shape
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 1
fontColor = (0, 255, 0)
lineType = 2
for det in json_dict['detections']:
    crop = det['crop']
    if crop['score'] < 0.6:
        continue
    x1 = int(crop['x1'] * img_width)
    x2 = int(crop['x2'] * img_width)
    y1 = int(crop['y1'] * img_height)
    y2 = int(crop['y2'] * img_height)
    img = cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 3)
    emotion = max(det['emotions'], key=det['emotions'].get)
    img = cv2.putText(img, emotion + "_" + "{:1.2f}".format(det['emotions'][emotion]),
                      (x1, y2 - 10),
                      font,
                      fontScale,
                      fontColor,
                      lineType)
cv2.imshow('Promity Rapidapi demo', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imwrite('promity_facial_emotion_endpoint.png', img)