Style and grammar checking / proofreading for more than 25 languages, including English, French, Polish, Spanish and German. Based on languagetool.org.
Sentiment analysis, stemming and lemmatization, part-of-speech tagging and chunking, phrase extraction and named entity recognition.
Topics Extraction tags locations, people, companies, dates and many other elements appearing in a text written in Spanish, English, French, Italian, Portuguese or Catalan. This detection process is carried out by combining a number of complex natural language processing techniques that make it possible to obtain morphological, syntactic and semantic analyses of a text and use them to identify different types of significant elements.
This is a free service that returns the geographical location of your user based on their IP address.
DuckDuckGo Zero-click Info includes topic summaries, categories, disambiguation, official sites, !bang redirects, definitions and more. You can use this API for many things, e.g. to define people, places, things, words and concepts; provide direct links to other services (via !bang syntax); list related topics; and retrieve official sites when available.
Multilingual sentiment analysis of texts from different sources (blogs, social networks, ...). Besides polarity at the sentence and global level, Sentiment Analysis uses advanced natural language processing techniques to also detect the polarity associated with both entities and concepts in the text. Sentiment Analysis also gives the user the possibility of detecting the polarity of user-defined entities and concepts, making the service a flexible tool applicable to any kind of scenario. Additionally, Sentiment Analysis detects whether the processed text is subjective or objective and whether it contains irony marks [beta], both at the global and sentence level, giving the user additional information about the reliability of the polarity obtained from the sentiment analysis.
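As an illustration of the difference between sentence-level and global polarity, here is a toy lexicon-based scorer in Python. The word lists and the counting scheme are invented for this sketch and bear no relation to any service's actual models:

```python
# Toy lexicon-based sentiment scorer illustrating polarity at the
# sentence level and at the global (document) level. The word lists
# below are invented placeholders; real systems use large lexicons
# and full NLP pipelines.

POSITIVE = {"good", "great", "excellent", "love", "reliable"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "unreliable"}

def sentence_polarity(sentence):
    """Label one sentence by counting positive vs. negative words."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def document_polarity(text):
    """Return a global label plus the per-sentence labels it is based on."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    labels = [sentence_polarity(s) for s in sentences]
    score = labels.count("positive") - labels.count("negative")
    overall = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return overall, list(zip(sentences, labels))
```

For example, a review whose sentences are mostly positive but contain one negative sentence would come out globally positive while still exposing the negative sentence individually.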
Diffbot extracts data from web pages automatically and returns structured JSON. For example, our Article API returns an article's title, author, date and full text. Use the web as your database! We use computer vision, machine learning and natural language processing to add structure to just about any web page.
Automatic language detection for texts obtained from any kind of source (blogs, Twitter, online news and so on). Through statistical techniques based on N-gram evaluation, more than 60 languages can be correctly identified.
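The N-gram technique behind this kind of language identification can be sketched in a few lines of Python. The two language profiles below are built from tiny invented sample strings, whereas real systems train profiles on large corpora for dozens of languages:

```python
from collections import Counter

# Minimal character-N-gram language identifier. Each language gets a
# profile of trigram counts; an input text is assigned to the language
# whose profile it overlaps most. The sample texts are toy placeholders.

def ngrams(text, n=3):
    """Count character n-grams, padding with spaces at the boundaries."""
    text = " " + text.lower() + " "
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

SAMPLES = {
    "english": "the quick brown fox jumps over the lazy dog and the cat",
    "spanish": "el rapido zorro marron salta sobre el perro perezoso y el gato",
}
PROFILES = {lang: ngrams(text) for lang, text in SAMPLES.items()}

def detect(text):
    """Pick the language whose trigram profile best overlaps the input."""
    grams = ngrams(text)
    def overlap(lang):
        profile = PROFILES[lang]
        return sum(min(count, profile[g]) for g, count in grams.items())
    return max(PROFILES, key=overlap)
```

Counting overlapping trigrams rather than whole words is what lets the technique work on short, noisy text such as tweets, since even a few words contribute many characteristic character sequences.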
This API provides text analysis for tone, sentiment, summarization, personality analysis, and more. It can be used for: part-of-speech tagging, named entity recognition, sentence disambiguation, keyword extraction, summarization and sentence significance, sentiment analysis, alliteration detection, word sense disambiguation, clustering, logistic regression scoring, prominence tagging, tagging for latent semantic indexing, tagging for singular value decomposition, phonetic decomposition, reading difficulty modeling, technical difficulty modeling, spelling correction, string comparison and plagiarism detection, author profiling, psychographic modeling, fact and statistic extraction, ism extraction, and character language modeling. It is also useful in the creation of chatbots, search engines, and KnolExtraction for automated documentation.
Automatic multilingual text classification according to pre-established categories defined in a model. The algorithm combines statistical classification with rule-based filtering, which makes it possible to obtain a high degree of precision in very different environments. Three models are available: IPTC (International Press Telecommunications Council standard), EuroVocs and the Corporate Reputation model. The languages covered are Spanish, English, French, Italian, Portuguese and Catalan.
The WebKnox text processing API lets you process (natural) language texts. You can detect the text's language, assess the quality of the writing, find entity mentions, tag parts of speech, extract dates and locations, or determine the sentiment of the text.
Text processing framework to analyse Natural Language. It is especially focused on text classification and sentiment analysis of online news media (general-purpose, multiple topics).
You can perform high-quality natural language processing (NLP), including: 1. sentiment recognition; 2. summarization of articles; 3. intent detection for social media comments; 4. top topics/keywords.
Ambiverse Natural Language Understanding API extracts entities from unstructured text, enabling a more precise transformation of texts into actionable, measurable, and easily accessible knowledge. Entities are identified by types such as person, location, organization, or product, and linked to the Wikipedia-derived YAGO knowledge graph. You can query the knowledge graph to obtain further information about these entities, such as Wikipedia links, textual descriptions, images, and lists of relevant categories.
Linguakit API (formerly CilenisAPI) helps you analyze and extract information from texts. Add language technology to your software in a few minutes using our cloud solution. We offer you technology based on years of research in Natural Language Processing in a very easy and scalable SaaS model through a RESTful API.
This API takes a paragraph and returns the text with each word stemmed using the Porter, Snowball, or UEA stemmer.
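To illustrate what suffix-stripping stemmers of this family do, here is a toy stemmer in pure Python. The rule table is a small invented subset and does not reproduce the actual Porter, Snowball, or UEA rules:

```python
# Toy suffix-stripping stemmer in the spirit of the Porter algorithm.
# It applies the first matching rule from a small hand-picked table;
# real stemmers use ordered rule phases with measure conditions.

RULES = [
    ("ational", "ate"),  # relational -> relate
    ("ization", "ize"),  # normalization -> normalize
    ("iveness", "ive"),  # attentiveness -> attentive
    ("fulness", "ful"),  # hopefulness -> hopeful
    ("ing", ""),
    ("ies", "y"),        # ponies -> pony
    ("ed", ""),
    ("ly", ""),
    ("s", ""),
]

def stem(word):
    """Strip the first matching suffix, leaving a stem of 2+ characters."""
    word = word.lower()
    for suffix, replacement in RULES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 2:
            return word[: len(word) - len(suffix)] + replacement
    return word

def stem_paragraph(text):
    """Stem every whitespace-separated word in a paragraph."""
    return " ".join(stem(w) for w in text.split())
```

Note the crudeness: this sketch turns "running" into "runn" rather than "run", because the consonant-doubling cleanup step of the real Porter algorithm is omitted.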
The WebKnox question-answering API allows you to find answers to natural language questions. These questions can be factual, such as "What is the capital of Australia?", or more complex.
A natural language processing API is a machine learning tool that is pre-trained to do things like evaluating the tone of a given text. Other capabilities of natural language APIs include syntax analysis, entity analysis, and content classification.
An NLP API deciphers the meaning and structure of a text. Using pre-trained natural language models, developers can apply this type of comprehension to the applications they’re working on to extract information and/or better understand customer sentiment and the conversations occurring online about their product or service.
Natural language processing is a key field that lies at the intersection of machine learning, computer science, and linguistics. NLP can trace its origins to the 1950s. At that time, although artificial intelligence was still a nascent subject of study, the famous Turing test and other experiments involving automatic translation gained traction in the scientific and academic communities.
APIs for natural language processing work via state-of-the-art statistical machine learning. Most require just a few lines of code to run. After training the model, you can apply it to any document or web page and let the program do the heavy lifting.
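A typical client really is just a few lines. The sketch below uses only Python's standard library; the endpoint URL, header names, and response shape are hypothetical placeholders that would need to be replaced with a real provider's documented contract:

```python
import json
import urllib.request

# Sketch of calling a hosted NLP API over REST. The model runs
# server-side, so the client only builds a JSON request and parses
# the JSON response. API_URL and the auth scheme are placeholders.

API_URL = "https://api.example.com/v1/sentiment"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                          # hypothetical credential

def build_request(text):
    """Build a POST request carrying the text as a JSON payload."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

def analyze(text):
    """Send the request and return the provider's parsed JSON response."""
    with urllib.request.urlopen(build_request(text)) as resp:
        return json.load(resp)
```

Everything model-related (training, feature extraction, inference) stays on the provider's side; swapping providers usually means changing only the URL, the auth header, and the response parsing.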
Developers who want to perform machine translation, language research, news analysis, part-of-speech tagging, noun phrase extraction, document indexing, topic modeling, stemming and lemmatization, sentiment analysis, named entity recognition, or classification can benefit from these APIs.
Natural language processing allows developers to analyze and classify text-based entries at a speed and with a degree of accuracy that humans could never hope to achieve on their own. APIs are integral to the integration of this software into existing systems. Once integrated, the NLP analyses help with everything from understanding customers’ opinions and generating UX insights to cross-comparing invoices to discover the relationship between requests and proofs of payment.
An NLP API will help you save time and money by performing indexing and other related tasks both quickly and efficiently.
Open-source NLP libraries are not only free but can also easily be customized. Examples of these open-source libraries available in Python (the preferred machine learning programming language) include the Natural Language Toolkit (NLTK), spaCy, Stanford CoreNLP, Gensim, and TextBlob.
All NLP APIs are supported and made available in multiple developer programming languages and SDKs; just select your preference from any API endpoint's page.
Sign up today for free on RapidAPI to begin using NLP APIs!