Key Terms Extraction
FREEMIUM · By Proxem · Updated 4 months ago

Popularity Score: 4.2/10
Latency: 35289 ms
Success Rate: 40%

Key Terms Extraction API Documentation

The perfect tool for tag cloud visualization. Extracts key terms and phrases from a text document using our terminology extraction methods. This natural language processing technique computes n-grams and lets you find the most statistically relevant ones in your data. Thanks to language detection, stop words are removed automatically. Works well for indexing, suggestion, and recommendation, or as a prerequisite for other text analytics-related tasks. Available languages: en, fr, it, sp, de, pt.
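As a rough illustration of what computing n-grams with stop-word filtering means, here is a toy sketch. This is not Proxem's actual implementation; the stop-word list and tokenizer are deliberately minimal:

// Toy sketch of n-gram extraction with stop-word filtering.
// Real terminology extraction also weighs candidates statistically.
const STOP_WORDS = new Set(["the", "a", "an", "of", "and", "is", "in", "to"]);

function ngrams(text, n) {
  const tokens = text.toLowerCase().match(/[a-z]+/g) || [];
  const grams = [];
  for (let i = 0; i + n <= tokens.length; i++) {
    const gram = tokens.slice(i, i + n);
    // discard candidates that start or end with a stop word
    if (!STOP_WORDS.has(gram[0]) && !STOP_WORDS.has(gram[n - 1])) {
      grams.push(gram.join(" "));
    }
  }
  return grams;
}

console.log(ngrams("the extraction of key terms in a text document", 2));
// -> [ 'key terms', 'text document' ]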

Endpoints
POST Key Terms Extraction
POST Language Detection

POST Key Terms Extraction

Extract the most relevant terms from the given text. The default method is based on large-scale machine learning and works well on short documents. The alternate method focuses on noun phrases and is suitable for a corpus or large documents.
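If you are unsure which method to pick, a simple heuristic is to switch on input size. The threshold below is an arbitrary illustration, not a documented cutoff:

// Hypothetical helper: method 0 (statistical) handles even short texts,
// while method 1 (linguistic, noun-phrase based) needs a larger volume
// of text to be precise.
function chooseMethod(text) {
  return text.length < 1000 ? 0 : 1;
}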

Header Parameters
X-RapidAPI-Host (STRING, required)
X-RapidAPI-Key (STRING, required)
Accept (STRING, optional)
Required Parameters
Text (required): Some text (the more the merrier).

Optional Parameters
lang (STRING, optional): Bypasses language detection by forcing the language. Accepts an ISO code such as en, fr, it, de, pt, sp, nl.
method (NUMBER, optional): Selects the extraction method. The default method (0) is based on machine learning over a large corpus and handles even small texts; method 1 is more linguistic and uses only the given text, so it needs a larger volume of text to be precise.
nbtopterms (NUMBER, optional): Number of top terms to return.
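A quick sketch of assembling these parameters into the request URL. The values here are arbitrary examples; URLSearchParams is available as a global in Node.js:

// Build the query string from the documented parameters.
const params = new URLSearchParams({
  method: "0",      // 0 = default statistical method, 1 = linguistic method
  nbtopterms: "20", // number of top terms to return
  lang: "en"        // force English instead of relying on language detection
});

const url =
  "https://proxem-term-extraction-v1.p.rapidapi.com/api/TermExtraction/Extract?" +
  params.toString();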
Code Snippet
unirest.post("https://proxem-term-extraction-v1.p.rapidapi.com/api/TermExtraction/Extract?method=0&nbtopterms=20")
.header("X-RapidAPI-Host", "proxem-term-extraction-v1.p.rapidapi.com")
.header("X-RapidAPI-Key", "SIGN-UP-FOR-KEY")
.header("Accept", "applications/json")
.header("Content-Type", "text/plain")
.send("Conversational modeling is an important task innatural language understanding and machine intelligence.Although previous approaches exist, they are often restricted to specific domains(e.g., booking an airline ticket) and require handcrafted rules.In this paper, we present a simple approach for this task which uses the recentlyproposed sequence to sequence framework.Our model converses by predicting the next sentencegiven the previous sentence or sentences in a conversation.The strength of our model is thatit can be trained end-to-end and thus requiresmuch fewer hand-crafted rules. We find that thisstraightforward model can generate simple conversations given a large conversational trainingdataset. Our preliminary suggest that, despite optimizing the wrong objective function, the modelis able to extract knowledge from both a domainspecific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On adomain-specific IT helpdesk dataset, the modelcan find a solution to a technical problem viaconversations. On a noisy open-domain movietranscript dataset, the model can perform simpleforms of common sense reasoning. As expected,we also find that the lack of consistency is a common failure mode of our model.1. IntroductionAdvances in end-to-end training of neural networks haveled to remarkable progress in many domains such as speechrecognition, computer vision, and language processing.Recent work suggests that neural networks can do morethan just mere classification, they can be used to map complicated structures to other complicated structures. An example of this is the task of mapping a sequence to anothersequence which has direct applications in natural languageunderstanding. One of the majoradvantages of this framework is that it requires little feature engineering and domain specificity whilst matching or surpassing state-of-the-art results. This advance, in our opinion, allows researchers to work on tasks for which domainknowledge may not be readily available, or for tasks whichare simply too hard to model.Conversational modeling can directly benefit from this formulation because it requires mapping between queries andreponses. Due to the complexity of this mapping, conversational modeling has previously been designed to be verynarrow in domain, with a major undertaking on feature engineering. In this work, we experiment with the conversation modeling task by casting it to a task of predicting thenext sequence given the previous sequence or sequencesusing recurrent networks.We find that this approach can do surprisingly well on generatingfluent and accurate replies to conversations.We test the model on chat sessions from an IT helpdeskdataset of conversations, and find that the model can sometimes track the problem and provide a useful answer tothe user. We also experiment with conversations obtainedfrom a noisy dataset of movie subtitles, and find that themodel can hold a natural conversation and sometimes perform simple forms of common sense reasoning. In bothcases, the recurrent nets obtain better perplexity comparedto the n-gram model and capture important long-range correlations. From a qualitative point of view, our model issometimes able to produce natural conversations.2. Related WorkOur approach is based on recent work which proposed to use neural networks to map sequences to sequences. 
This framework has beenused for neural machine translation and achieves improvements on the English-French and English-Germantranslation tasks from the WMT’14 dataset. It has also been used forother tasks such as parsing andimage captioning . Since it iswell known that vanilla RNNs suffer from vanishing gradients, most researchers use variants of theLong Short Term Memory (LSTM) recurrent neural network .Our work is also inspired by the recent success of neural language modeling, which shows that recurrent neuralnetworks are rather effective models for natural language.More recently, work by Sordoni et al.and Shang et al. , used recurrent neuralnetworks to model dialogue in short conversations (trainedon Twitter-style chats).Building bots and conversational agents has been pursued by many researchers over the last decades, and itis out of the scope of this paper to provide an exhaustive list of references. However, most of these systemsrequire a rather complicated processing pipeline of manystages. Our work differs from conventional systems byproposing an end-to-end approach to the problem whichlacks domain knowledge. It could, in principle, be combined with other systems to re-score a short-list of candidate responses, but our work is based on producing answers given by a probabilistic model trained to maximizethe probability of the answer given some contextOur  approach  makes  use  of  the  sequence  to  sequence(seq2seq) model described in.  Themodel is based on a recurrent neural network which readsthe  input sequence one token  at  a  time,  and  predicts theoutput sequence, also one token at a time. During training,the true output sequence is given to the model, so learningcan be done by backpropagation.  The model is trained tomaximize the cross entropy of the correct sequence givenits context. During inference, given that the true output sequence is not observed, we simply feed the predicted outputtoken as input to predict the next output. This is a “greedy”inference approach.  A less greedy approach would be touse beam search, and feed several candidates at the previous step to the next step.  The predicted sequence can beselected based on the probability of the sequence.")
.end(function (result) {
  console.log(result.status, result.headers, result.body);
});
Sample Response
Request URL: https://proxem-term-extraction-v1.p.rapidapi.com/api/TermExtraction/Extract
Request Method: POST
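The page does not reproduce a response body. For illustration only, a key-terms endpoint of this kind might return a JSON array of scored terms along the lines of the hypothetical shape below; the actual schema may differ:

// Hypothetical response shape -- not the documented schema.
const exampleBody = [
  { term: "conversational modeling", score: 0.93 },
  { term: "neural networks", score: 0.88 },
  { term: "movie subtitles", score: 0.61 }
];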

Install SDK for NodeJS

Installing

To utilize unirest for Node.js, install the npm module:

$ npm install unirest

After installing the npm package you can now start simplifying requests like so:

var unirest = require('unirest');

Creating Request

unirest.post("https://proxem-term-extraction-v1.p.rapidapi.com/api/TermExtraction/Extract?method=0&nbtopterms=20")
.header("X-RapidAPI-Host", "proxem-term-extraction-v1.p.rapidapi.com")
.header("X-RapidAPI-Key", "SIGN-UP-FOR-KEY")
.header("Accept", "applications/json")
.header("Content-Type", "text/plain")
.send("Conversational modeling is an important task innatural language understanding and machine intelligence.Although previous approaches exist, they are often restricted to specific domains(e.g., booking an airline ticket) and require handcrafted rules.In this paper, we present a simple approach for this task which uses the recentlyproposed sequence to sequence framework.Our model converses by predicting the next sentencegiven the previous sentence or sentences in a conversation.The strength of our model is thatit can be trained end-to-end and thus requiresmuch fewer hand-crafted rules. We find that thisstraightforward model can generate simple conversations given a large conversational trainingdataset. Our preliminary suggest that, despite optimizing the wrong objective function, the modelis able to extract knowledge from both a domainspecific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On adomain-specific IT helpdesk dataset, the modelcan find a solution to a technical problem viaconversations. On a noisy open-domain movietranscript dataset, the model can perform simpleforms of common sense reasoning. As expected,we also find that the lack of consistency is a common failure mode of our model.1. IntroductionAdvances in end-to-end training of neural networks haveled to remarkable progress in many domains such as speechrecognition, computer vision, and language processing.Recent work suggests that neural networks can do morethan just mere classification, they can be used to map complicated structures to other complicated structures. An example of this is the task of mapping a sequence to anothersequence which has direct applications in natural languageunderstanding. One of the majoradvantages of this framework is that it requires little feature engineering and domain specificity whilst matching or surpassing state-of-the-art results. This advance, in our opinion, allows researchers to work on tasks for which domainknowledge may not be readily available, or for tasks whichare simply too hard to model.Conversational modeling can directly benefit from this formulation because it requires mapping between queries andreponses. Due to the complexity of this mapping, conversational modeling has previously been designed to be verynarrow in domain, with a major undertaking on feature engineering. In this work, we experiment with the conversation modeling task by casting it to a task of predicting thenext sequence given the previous sequence or sequencesusing recurrent networks.We find that this approach can do surprisingly well on generatingfluent and accurate replies to conversations.We test the model on chat sessions from an IT helpdeskdataset of conversations, and find that the model can sometimes track the problem and provide a useful answer tothe user. We also experiment with conversations obtainedfrom a noisy dataset of movie subtitles, and find that themodel can hold a natural conversation and sometimes perform simple forms of common sense reasoning. In bothcases, the recurrent nets obtain better perplexity comparedto the n-gram model and capture important long-range correlations. From a qualitative point of view, our model issometimes able to produce natural conversations.2. Related WorkOur approach is based on recent work which proposed to use neural networks to map sequences to sequences. 
This framework has beenused for neural machine translation and achieves improvements on the English-French and English-Germantranslation tasks from the WMT’14 dataset. It has also been used forother tasks such as parsing andimage captioning . Since it iswell known that vanilla RNNs suffer from vanishing gradients, most researchers use variants of theLong Short Term Memory (LSTM) recurrent neural network .Our work is also inspired by the recent success of neural language modeling, which shows that recurrent neuralnetworks are rather effective models for natural language.More recently, work by Sordoni et al.and Shang et al. , used recurrent neuralnetworks to model dialogue in short conversations (trainedon Twitter-style chats).Building bots and conversational agents has been pursued by many researchers over the last decades, and itis out of the scope of this paper to provide an exhaustive list of references. However, most of these systemsrequire a rather complicated processing pipeline of manystages. Our work differs from conventional systems byproposing an end-to-end approach to the problem whichlacks domain knowledge. It could, in principle, be combined with other systems to re-score a short-list of candidate responses, but our work is based on producing answers given by a probabilistic model trained to maximizethe probability of the answer given some contextOur  approach  makes  use  of  the  sequence  to  sequence(seq2seq) model described in.  Themodel is based on a recurrent neural network which readsthe  input sequence one token  at  a  time,  and  predicts theoutput sequence, also one token at a time. During training,the true output sequence is given to the model, so learningcan be done by backpropagation.  The model is trained tomaximize the cross entropy of the correct sequence givenits context. During inference, given that the true output sequence is not observed, we simply feed the predicted outputtoken as input to predict the next output. This is a “greedy”inference approach.  A less greedy approach would be touse beam search, and feed several candidates at the previous step to the next step.  The predicted sequence can beselected based on the probability of the sequence.")
.end(function (result) {
  console.log(result.status, result.headers, result.body);
});
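For production use you may want a more defensive callback than the one above. The function below is a sketch you could pass to .end(handleResult) instead; the status check is ordinary HTTP handling, not an API-specific requirement:

// Defensive result handler: verify the HTTP status before using the body.
function handleResult(result) {
  if (result.status !== 200) {
    console.error("Request failed:", result.status, result.body);
    return;
  }
  console.log("Extracted terms:", result.body);
}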