Ontology-Based Topic Detection

FREEMIUM
By Proxem
Updated 9 months ago
Category: Data
Popularity Score: 4/10
Latency: 46027 ms
Success Rate: 20%

Ontology-Based Topic Detection API Documentation

A text analysis service that identifies what any text is about by extracting the most relevant Wikipedia categories through a patented NLP technology.


POST Get categories
POST Get corpus categories

POST Get categories

Returns the top themes associated with the given text.

Freemium: This API has a free, limited plan and paid plans. You can subscribe to it directly from RapidAPI.
Header Parameters
X-RapidAPI-Key (STRING, required)

Required Parameters
Document (JSON_STRING, required): The document to analyze

Optional Parameters
Accept (STRING, optional): The expected type of the response
nbtopcat (NUMBER, optional): The maximum number of expected categories (max 50)
cleanup (BOOLEAN, optional): Try to remove the less useful categories (defaults to true)
srclang (STRING, optional): Set the language of the given document (prevents auto-detection)
edges (BOOLEAN, optional): Set to true to receive parent/child relations between categories
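The optional parameters above are passed as query-string values, and any option you do not set should be omitted from the URL rather than sent as the literal string "undefined". A minimal sketch of a helper that does this (the function name is ours; the base URL and parameter names come from the tables above):

```javascript
// Build the GetCategories URL, omitting any option left unset.
// Parameter names (nbtopcat, cleanup, srclang, edges) are taken from
// the optional-parameters table above.
function buildCategoriesUrl(options) {
  var base = "https://proxem-thematization.p.rapidapi.com/api/wikiAnnotator/GetCategories";
  var names = ["nbtopcat", "cleanup", "srclang", "edges"];
  var pairs = [];
  names.forEach(function (name) {
    if (options && options[name] !== undefined) {
      pairs.push(name + "=" + encodeURIComponent(options[name]));
    }
  });
  return pairs.length ? base + "?" + pairs.join("&") : base;
}
```

For example, `buildCategoriesUrl({ nbtopcat: 10, cleanup: true })` yields a URL with only those two parameters, and `buildCategoriesUrl()` yields the bare endpoint.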
Request Snippet
// Replace <your-rapidapi-key> with your key; the query parameters shown are example values and are optional.
unirest.post("https://proxem-thematization.p.rapidapi.com/api/wikiAnnotator/GetCategories?nbtopcat=10&cleanup=true&srclang=en&edges=false")
.header("X-RapidAPI-Key", "<your-rapidapi-key>")
.header("Accept", "application/json")
.header("Content-Type", "text/plain")
.send("At Proxem, our clients ask us to extract information from e-mails, social medias, press articles, and basically any type of text you can imagine. In the standard case, the text to process is written in various languages. To establish systems that support a wide scale of languages and formats is one of the mission of our Research team.Another goal of ours is to develop cross-lingual algorithms, that is algorithms which take as input texts in different languages and output an information computed on all those texts. For example on a task called sentiment analysis, which consists in detecting the \"polarity\" of a document (\"is this document rather positive or negative?\"), we want to implement a unique algorithm that would take as input sentences in English, Chinese, Spanish, etc and would compute a score. There are multiple reasons for us to aim at this. One is for simplicity sake. Indeed, we do not want to implement as many algorithms as languages we may have to handle. Another reason for that choice is that we want to leverage the important amount of available data for some languages to improve the accuracy on languages where data is rare.")
.end(function (result) {
  console.log(result.status, result.headers, result.body);
});

Install SDK for NodeJS

Installing

To use Unirest for Node.js, install the npm module:

$ npm install unirest

After installing the npm package, you can start making requests like so:

var unirest = require('unirest');

Creating Request

// Replace <your-rapidapi-key> with your key; the query parameters are optional example values.
unirest.post("https://proxem-thematization.p.rapidapi.com/api/wikiAnnotator/GetCategories?nbtopcat=10&cleanup=true")
.header("X-RapidAPI-Key", "<your-rapidapi-key>")
.header("Accept", "application/json")
.header("Content-Type", "text/plain")
.send("The document to analyze, as plain text.")
.end(function (result) {
  console.log(result.status, result.headers, result.body);
});
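Given the 20% success rate reported above, it is worth checking the HTTP status before using the response body. A minimal sketch of a result check for the unirest callback (the helper name is ours, not part of the SDK; the response schema is not documented on this page, so inspect `result.body` for its actual structure):

```javascript
// Interpret a Unirest result object: throw on a non-2xx status,
// otherwise return the parsed body for further processing.
function unwrapResult(result) {
  if (result.status < 200 || result.status >= 300) {
    throw new Error("GetCategories failed with HTTP " + result.status);
  }
  return result.body;
}
```

Inside the `.end(function (result) { ... })` callback, call `unwrapResult(result)` and handle the thrown error instead of logging an unusable body.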