The Wotcher API is a simple, robust tool for upholding user safety and well-being in digital environments. Through straightforward API calls, users can analyze text content in real time and receive immediate feedback on potential safety concerns, including self-harm, sexual content, violence, and hate speech.
- **Real-time Analysis:** The API delivers fast analysis, providing instant feedback on submitted text content.
- **Multi-dimensional Assessment:** It evaluates text across multiple dimensions, such as self-harm, sexual content, violence, and hate speech, for a comprehensive safety assessment.
- **Sophisticated Algorithms:** Powered by advanced machine learning, the API detects subtle nuances and contextual cues to identify potential safety risks in text.
- **Customizable Thresholds:** Users can set custom thresholds for each safety category to match their specific safety requirements and sensitivity levels.
- **Scalability:** Built to handle large volumes of requests, the API scales seamlessly to accommodate varying levels of demand.
- **Secure Integration:** Integration follows industry-standard security protocols, ensuring the confidentiality and integrity of user data throughout the analysis process.
- **No Data Storage:** The Wotcher Content Safety Platform API does not store any submitted data, protecting user privacy and data security.
1. **API Call:** Users submit text content to the API via a simple API call.
2. **Analysis:** The API analyzes the submitted content across multiple safety dimensions using sophisticated algorithms.
3. **Feedback:** Instant feedback is returned, indicating the presence or absence of safety concerns related to self-harm, sexual content, violence, and hate speech.
4. **Actionable Insights:** Users can leverage these insights to take appropriate action, such as content moderation, user intervention, or further analysis.
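The steps above can be sketched in a small client. This is a minimal sketch, not an official SDK: the base URL `https://api.example.com` is a placeholder, and the request/response field names follow the reference tables below.

```python
import json
from urllib import request

# Hypothetical base URL -- the real host is not given in this documentation.
API_BASE = "https://api.example.com"

def build_analyze_request(content, severity_levels=None):
    """Build the POST request for /text/analyze (step 1: API Call)."""
    body = {"Content": content}
    if severity_levels is not None:
        # Optional per-category severity thresholds, per the request-body table.
        body["SeverityLevels"] = severity_levels
    return request.Request(
        API_BASE + "/text/analyze",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def flagged_categories(response):
    """List the categories whose `detected` flag is true (step 4: insights)."""
    data = response.get("data") or {}
    return [
        name
        for name in ("hateSpeech", "sexualContent", "violence", "selfHarm")
        if data.get(name, {}).get("detected")
    ]
```

Sending the built request (e.g. with `urllib.request.urlopen`) and decoding the JSON body yields a response dict that `flagged_categories` can inspect.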
- **Social Media Platforms:** Ensure the safety of users by monitoring and moderating user-generated content for potential safety risks.
- **Online Communities:** Maintain a safe and inclusive environment by proactively identifying and addressing harmful content.
- **E-learning Platforms:** Safeguard learners from exposure to harmful material by screening educational content for safety concerns.
- **Messaging Apps:** Protect users from harmful interactions by monitoring and filtering messages for safety violations.
| Endpoint | Description |
|---|---|
| `/text/analyze` | Analyzes text content for safety concerns. |
| `/image/analyze` | Analyzes image content for safety concerns (coming soon). |
URL: /text/analyze
Method: POST
Request Body:
| Parameter | Type | Description |
|---|---|---|
| Content | string | The text content to be analyzed. |
| SeverityLevels | object | (Optional) The severity levels for each category. Defaults to "None" if not specified. |
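An example request body might look like the following. The category keys and severity labels (`"medium"`, `"high"`) inside `SeverityLevels` are assumptions for illustration; only the `Content` and `SeverityLevels` parameters themselves are defined above.

```json
{
  "Content": "Some user-generated text to check.",
  "SeverityLevels": {
    "hateSpeech": "medium",
    "violence": "high"
  }
}
```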
Success Response:
| Field | Type | Description |
|---|---|---|
| isSuccessful | boolean | Indicates whether the request was successful. |
| data | object | The analysis results. |
| errorMessage | string | Error message if the request failed. |
| statusCode | number | HTTP status code. |
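A successful response might look like the following. This is an illustrative sketch assembled from the field tables in this reference (the spelling `contentAnalyzed` and the severity label `"high"` are assumptions), not captured API output.

```json
{
  "isSuccessful": true,
  "data": {
    "contentAnalyzed": "Some user-generated text to check.",
    "hateSpeech": { "detected": false },
    "sexualContent": { "detected": false },
    "violence": { "detected": true, "severity": "high" },
    "selfHarm": { "detected": false }
  },
  "errorMessage": null,
  "statusCode": 200
}
```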
data Object:
| Field | Type | Description |
|---|---|---|
| contentAnalyzed | string | The text content that was analyzed. |
| hateSpeech | object | Details about hate speech detection. |
| sexualContent | object | Details about sexual content detection. |
| violence | object | Details about violence detection. |
| selfHarm | object | Details about self-harm detection. |
hateSpeech, sexualContent, violence, selfHarm Objects:
| Field | Type | Description |
|---|---|---|
| detected | boolean | Indicates if the content was detected. |
| severity | string | The severity level if content was detected. |
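Since each detection object carries a `detected` flag and a `severity` label, a caller can apply its own per-category moderation threshold. A minimal sketch, assuming the severity labels are `"low"`, `"medium"`, and `"high"` (the exact label set is not specified in this reference):

```python
# Assumed severity ordering -- the actual label set is not documented here.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3}

def should_block(category_result, threshold="medium"):
    """Decide whether one category's detection result crosses a moderation threshold.

    `category_result` is a hateSpeech/sexualContent/violence/selfHarm object
    with `detected` (boolean) and, when detected, `severity` (string).
    """
    if not category_result.get("detected"):
        return False
    severity = category_result.get("severity", "low")
    return SEVERITY_RANK.get(severity, 0) >= SEVERITY_RANK[threshold]
```

Unknown severity labels rank as 0 here, so they never trigger a block; a stricter integration might instead treat them as the highest severity.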
Error Response:
| Field | Type | Description |
|---|---|---|
| isSuccessful | boolean | Indicates whether the request was successful. |
| data | null | Empty in case of error. |
| errorMessage | string | Error message describing the issue. |
| statusCode | number | HTTP status code. |
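Because both success and error responses share the `isSuccessful`, `errorMessage`, and `statusCode` fields, a caller can branch on `isSuccessful` before touching `data`. A minimal sketch under that schema:

```python
def check_response(response):
    """Return `data` from a successful response; raise on an error response.

    Follows the success/error schemas above: on error, `isSuccessful` is false,
    `data` is null, and `errorMessage`/`statusCode` describe the failure.
    """
    if not response.get("isSuccessful"):
        raise RuntimeError(
            f"Wotcher API error {response.get('statusCode')}: "
            f"{response.get('errorMessage')}"
        )
    return response["data"]
```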