TheySay PreCeive API Documentation

Introduction

TheySay PreCeive API is a platform-agnostic service which enables developers to access and mix-and-match our powerful text analysis processors that cover sentiment analysis, speculation detection, part-of-speech tagging, dependency parsing, and others. If you're building an application that cannot do without serious, state-of-the-art text analytics but don't want to delve deep into natural language processing, then this is the API for you.

Getting started with PreCeive API is easy. Test-drive our live public API demo, explore its end points below, and contact us to receive a development key. Need help or want to give feedback? Contact us - we'd love to hear from you!

API Clients

To help you get started, Open Source API clients are currently available for Java (1), Java (2), Scala, Python, Ruby, Node.js, R, and PHP. [PreCeive-Batch](https://github.com/theysay/preceive-batch) is a Java based tool that is suitable for batch processing and evaluations.

Authentication

TheySay PreCeive API uses HTTP Basic authentication over HTTPS.

User Levels

While most end points are available to all users, some can be accessed only by premium users (denoted as ☆☆☆PREMIUM USER☆☆☆ in the documentation).

Multilingual Support

The API expects English text by default. Beyond English, multilingual support is provided in the form of specific language-analysis pairs; it currently covers German and Spanish sentiment analysis.

HTTP Methods

PreCeive API follows REST principles. The following HTTP request methods are supported for analysis requests:

  • POST (recommended)
    • Query fields are in the request body and expressed as JSON.
    • Example payload: { "text":"Patchy rain, sleet or snow in parts...", "level":"sentence" }
  • GET
    • Query fields are expressed as parameters in the URL and must be URL-encoded. Note that GET exposes only a limited subset of available query fields.
    • Example URL: /v1/sentiment?text=how%20cool%20is%20that!&level=sentence
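
For example, a minimal POST request using Python and the requests client library might look like the sketch below; the username and password are placeholders for your own credentials:

    import requests

    # HTTP Basic authentication over HTTPS (placeholder credentials).
    AUTH = ("your-username", "your-password")

    # Query fields go in the request body as JSON.
    payload = {
        "text": "Patchy rain, sleet or snow in parts...",
        "level": "sentence",
    }

    response = requests.post(
        "https://api.theysay.io/v1/sentiment",
        json=payload,
        auth=AUTH,
    )
    response.raise_for_status()
    print(response.json())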

HTTP Response Codes

  • 200 OK - The request was successful.
  • 201 Created - The request was successful and a resource was created.
  • 400 Bad Request - The request could not be interpreted correctly or some required parameters were missing.
  • 401 Unauthorized - Authentication failed - double-check your username and/or password.
  • 405 Method Not Allowed - The requested method is not supported. Only GET and POST are allowed.
  • 429 Too Many Requests - Quota or rate limit exceeded (see below).
  • 500 Internal Server Error - Something is broken. Please contact us and we'll investigate.

Quotas and Rate Limits

We enforce two request quotas: requests per day and requests per minute. Your quotas depend on your API subscription. By default, the following rates apply:

  • Maximum 500 requests per day, reset at midnight UTC.
  • Maximum 30 requests per minute.

Responses returned by the API contain information about your quota in the following response header fields:

  • X-RequestLimit-Limit - # of requests that you can send in a day. Example: 15000.
  • X-RequestLimit-Remaining - # of requests that you can send before you will exceed your daily request limit. Example: 12323.
  • X-RequestLimit-Reset - When your next daily quota will be reset (in UTC [epoch milliseconds](http://en.wikipedia.org/wiki/Unix_time)). Example: 1360281599708.
  • X-RateLimit-IntervalSecs - The length of your rate limit window. Example: 60.
  • X-RateLimit-Limit - # of requests that you can send within your rate limit window. Example: 30.
  • X-RateLimit-Remaining - # of requests that you can send before you will exceed your rate limit. Example: 25.
  • X-RateLimit-Reset - When your next rate limit window will be reset (in UTC [epoch milliseconds](http://en.wikipedia.org/wiki/Unix_time)). Example: 1360254866709.

You can also see your current rate limit status by calling /rate_limit. Example: https://api.theysay.io/rate_limit.
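
As a sketch, the quota headers can be read straight off any response; the reset values are epoch milliseconds, so divide by 1000 before converting (placeholder credentials):

    import datetime
    import requests

    AUTH = ("your-username", "your-password")  # placeholder credentials

    response = requests.post(
        "https://api.theysay.io/v1/sentiment",
        json={"text": "How cool is that!"},
        auth=AUTH,
    )

    remaining = int(response.headers["X-RateLimit-Remaining"])
    reset_ms = int(response.headers["X-RateLimit-Reset"])
    reset_at = datetime.datetime.fromtimestamp(reset_ms / 1000, tz=datetime.timezone.utc)
    print(f"{remaining} requests left in this window; resets at {reset_at}")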

For more information about quotas, rate limits, and subscriptions, contact us.

Maximum Request Length

The maximum length of the text body in each request is 20000 characters.

JSONP Support

Use the callback request parameter to add a [JSONP](http://en.wikipedia.org/wiki/JSONP) wrapper. The returned Content-Type will be application/javascript.

GZIP Compression

Add Accept-Encoding: gzip to your request headers if you want the API to deliver a gzipped stream.

Server Version

To view build and version information about the current API, call /version. Example: https://api.theysay.io/version


Version 1 API

Sentiment: English

/v1/sentiment

Sentiment, a dimension of non-factuality in language that is closely related to subjectivity/affect/emotion/moods/feelings, reflects psychological evaluation with the following fundamental poles:

  • positive (~ good / pros / favourable / desirable / recommended / thumbs up /...) vs
  • negative (~ bad / cons / unfavourable / undesirable / not recommended / thumbs down /...)

You can use the Sentiment Analysis service to discover and score deep, fine-grained sentiments and opinions in text. The analysis, produced by a human-like sentiment reasoning algorithm, captures both explicit "author sentiment" and general, implicit "reader sentiment" beyond explicit opinions, which ultimately stems from affective common sense and from issues and events that are generally considered good or bad in the world.

The returned analysis includes majority sentiment labels, fine-grained 3-way positive/neutral/negative percentage scores, and other useful auxiliary fields.

Returns sentiment information about the entire text (document-level sentiment analysis).

POST  /v1/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) to control the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive": 3.5, "neutral": 2.7, "negative": 18 }
Response 200
{ "sentiment": { "label": "POSITIVE", "positive": 0.941, "negative": 0.0, "neutral": 0.059 }, "wordCount": 12 }
Returns sentiment information about each sentence in the text (sentence-level sentiment analysis).

POST  /v1/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentence-level analysis. Example: sentence
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) to control the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive": 3.5, "neutral": 2.7, "negative": 18 }
Response 200
[{ "sentiment": { "label": "POSITIVE", "positive": 0.787, "negative": 0.16, "neutral": 0.053, "confidence": 0.668 }, "start": 0, "end": 36, "sentenceIndex": 0, "text": "The new French President Francois Hollande wants a '' growth pact '' in Europe - a set of reforms designed to boost European economies and mitigate the pain caused by government spending cuts across the continent ." }, { "sentiment": { "label": "NEGATIVE", "positive": 0.347, "negative": 0.627, "neutral": 0.026, "confidence": 0.614 }, "start": 37, "end": 68, "sentenceIndex": 1, "text": "All the bad loans made by eurozone banks may need to be cleaned up ( by injecting money into the banks ) because many national governments probably can not afford it ." }]
Returns sentiment information about each individual entity (term, keyword) mentioned in the text (entity-level sentiment analysis).

POST  /v1/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables entity-level sentiment analysis. Example: entity
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) to control the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive": 3.5, "neutral": 2.7, "negative": 18 }
  • targets (string, optional) - Match expression for target entities. Example: market|business|opportunity|cost
  • matching (string, optional) - Matching mode. Example: head
Response 200
[{ "sentiment": { "label": "POSITIVE", "positive": 1.0, "negative": 0.0, "neutral": 0.0, "confidence": 0.756 }, "start": 2, "end": 2, "sentence": "'' This collaboration is sending a strong message to all the spammers : Stop sending us spam .", "sentenceHtml": "'' This <span class=\"entityMention\">collaboration</span> is sending a strong message to all the spammers : Stop sending us spam .", "text": "collaboration", "headNoun": "collaboration", "headNounIndex": 2, "salience": 1.0 }, { "sentiment": { "label": "NEGATIVE", "positive": 0.412, "negative": 0.588, "neutral": 0, "confidence": 0.689 }, "start": 11, "end": 11, "sentence": "'' This collaboration is sending a strong message to all the spammers : Stop sending us spam .", "sentenceHtml": "'' This collaboration is sending a strong message to all the <span class=\"entityMention\">spammers</span> : Stop sending us spam .", "text": "spammers", "headNoun": "spammers", "headNounIndex": 11, "salience": 0.7 }]
Returns sentiment information about aggregated entities (terms, keywords) mentioned in the text. Individual entity mentions are grouped using lowercase head noun matching and scored using weighted sentiment scores.

POST  /v1/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentiment analysis for entity aggregates. Example: entityaggregate
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) to control the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive": 3.5, "neutral": 2.7, "negative": 18 }
Response 200
[{ "entity": "osborne", "frequency": 2, "sentiment": { "label": "NEGATIVE", "positive": 0.0, "negative": 0.96, "neutral": 0.04, "confidence": 0.801 }, "salience": 1.0, "mentions": [{ "sentiment": { "label": "NEGATIVE", "positive": 0.0, "negative": 0.851, "neutral": 0.149, "confidence": 0.775 }, "start": 0, "end": 1, "sentence": "Mr Osborne said the banking system was not working for its customers .", "sentenceHtml": " <span class=\"entityMention\">Mr Osborne</span> said the banking system was not working for its customers .", "text": "Mr Osborne", "headNoun": "Osborne", "headNounIndex": 1, "salience": 1.0 }, { "sentiment": { "label": "NEGATIVE", "positive": 0.0, "negative": 0.861, "neutral": 0.139, "confidence": 0.827 }, "start": 13, "end": 13, "sentence": "Osborne also said that banks had failed to take responsibility for their actions .", "sentenceHtml": " <span class=\"entityMention\">Osborne</span> also said that banks had failed to take responsibility for their actions .", "text": "Osborne", "headNoun": "Osborne", "headNounIndex": 13, "salience": 1.0 }] }]
Returns sentiment information about detailed relations between entities (terms, keywords) mentioned in the text.

POST  /v1/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables relational entity-level sentiment analysis. Example: entityrelation
Response 200
[{ "entity1": { "head": "Avanesov", "headIndex": 2, "text": "Russian Georgiy Avanesov" }, "entity2": { "head": "botnet", "headIndex": 17, "text": "Bredolab botnet" }, "sentiment": { "label": "NEGATIVE", "positive": 0.209, "negative": 0.523, "neutral": 0.268 }, "salience": 0.243, "sentence": "Russian Georgiy Avanesov was in May sentenced to four years in jail for being behind the Bredolab botnet which was believed to have been generating more than # 80,000 a month in revenue .", "sentenceHtml": " <span class=\"entity1\">Russian Georgiy Avanesov</span> was in May sentenced to four years in jail for being behind the <span class=\"entity2\">Bredolab botnet</span> which was believed to have been generating more than # 80,000 a month in revenue ." }, { "entity1": { "head": "Avanesov", "headIndex": 2, "text": "Russian Georgiy Avanesov" }, "entity2": { "head": "revenue", "headIndex": 32, "text": "revenue" }, "sentiment": { "label": "POSITIVE", "positive": 0.377, "negative": 0.314, "neutral": 0.309 }, "salience": 0.155, "sentence": "Russian Georgiy Avanesov was in May sentenced to four years in jail for being behind the Bredolab botnet which was believed to have been generating more than # 80,000 a month in revenue .", "sentenceHtml": " <span class=\"entity1\">Russian Georgiy Avanesov</span> was in May sentenced to four years in jail for being behind the Bredolab botnet which was believed to have been generating more than # 80,000 a month in <span class=\"entity2\">revenue</span> ." }]
Returns information about the flow of sentiment through the text (document-level sentiment timeline analysis). The analysis covers contextual sentence-level sentiment labels and positional co-ordinates for individual words in the text which you can use to plot the temporal development (or flow) of sentiment through the text.

POST  /v1/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables document-level sentiment flow analysis. Example: word
Response 200
[{ "sentiment": { "label": "NEGATIVE", "timelineY": -1.0 }, "wordIndex": 0, "text": "There" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.004 }, "wordIndex": 1, "text": "have" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.008 }, "wordIndex": 2, "text": "been" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.012 }, "wordIndex": 3, "text": "clashes" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.0170000000000001 }, "wordIndex": 4, "text": "throughout" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.0210000000000001 }, "wordIndex": 5, "text": "the" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.025 }, "wordIndex": 6, "text": "night" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.029 }, "wordIndex": 7, "text": "in" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.033 }, "wordIndex": 8, "text": "many" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.037 }, "wordIndex": 9, "text": "parts" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.042 }, "wordIndex": 10, "text": "of" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.046 }, "wordIndex": 11, "text": "Syria" }, { "sentiment": { "label": "NEUTRAL", "timelineY": -1.046 }, "wordIndex": 12, "text": "." }]

Sentiment: Multilingual

/v1/multilingual/sentiment

Beyond English, the API offers sentiment analysis for German (de) and Spanish (es) at the document and the sentence levels.

Returns sentiment information about the entire text (document-level sentiment analysis).

POST  /v1/multilingual/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • language (string, required) - The ISO 639-1 natural language code for the input text. See http://www.loc.gov/standards/iso639-2/php/code_list.php for the codes. Example: de
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) to control the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive": 3.5, "neutral": 2.7, "negative": 18 }
Response 200
{ "sentiment": { "label": "POSITIVE", "positive": 0.941, "negative": 0.0, "neutral": 0.059 }, "wordCount": 12 }
Returns sentiment information about each sentence in the text (sentence-level sentiment analysis).

POST  /v1/multilingual/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentence-level analysis. Example: sentence
  • language (string, required) - The ISO 639-1 natural language code for the input text. See http://www.loc.gov/standards/iso639-2/php/code_list.php for the codes. Example: de
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) to control the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive": 3.5, "neutral": 2.7, "negative": 18 }
Response 200
[{ "sentiment": { "label": "NEGATIVE", "positive": 0.147, "negative": 0.778, "neutral": 0.076 }, "start": 0, "end": 14, "sentenceIndex": 0, "text": "Rund 70 Flüchtlinge haben in Heidenau gegen die schlechten Bedingungen in ihrer Unterkunft protestiert ." },{ "sentiment": { "label": "NEGATIVE", "positive": 0.256, "negative": 0.573, "neutral": 0.17 }, "start": 15, "end": 22, "sentenceIndex": 1, "text": "USA bereiten sich auf 10.000 syrische Flüchtlinge vor" }]

Emotions: Unbounded

/v1/emotion

Beyond positive vs. negative sentiment polarity, a vast range of psychological dimensions exist in the realm of emotions/moods/feelings/affect. You can use the Emotion Analysis service to project the text onto a fine-grained, multi-dimensional emotion space which is more natural than a singular majority label. By default, all emotion scores are unbounded (unnormalised, unscaled). The returned analysis lists emotion dimension labels, each with a confidence value from the prediction, and covers the following basic emotion dimensions:

  • anger1D - 1-dimensional anger scale (> 0).
  • fear1D - 1-dimensional fear scale (> 0).
  • shame1D - 1-dimensional shame scale (> 0).
  • surprise1D - 1-dimensional surprise scale (> 0).
  • calm2D - 2-dimensional scale between calmness (> 0) vs. agitation (< 0).
  • happy2D - 2-dimensional scale between happiness (> 0) vs. sadness (< 0).
  • like2D - 2-dimensional scale between liking (> 0) vs. disliking/disgust (< 0).
  • sure2D - 2-dimensional scale between certainty/sureness (> 0) vs. uncertainty/unsureness (< 0).
Returns unbounded (unnormalised, unscaled) emotion dimensions for the entire input text (document-level emotion analysis).

POST  /v1/emotion

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "emotions": [ { "dimension": "anger1D", "score": 1.667, "confidence": 0.217 }, { "dimension": "calm2D", "score": -0.478, "confidence": 0.032 }, { "dimension": "fear1D", "score": 0 }, { "dimension": "happy2D", "score": 0, "confidence": 0 }, { "dimension": "like2D", "score": -1.4, "confidence": 0.099 }, { "dimension": "shame1D", "score": 0, "confidence": 0 }, { "dimension": "sure2D", "score": -0.667, "confidence": 0.095 }, { "dimension": "surprise1D", "score": 0, "confidence": 0 } ]}
Returns unbounded (unnormalised, unscaled) emotion dimensions for each sentence in the input text (sentence-level emotion analysis).

POST  /v1/emotion

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentence-level analysis. Example: sentence
Response 200
[{ "emotions": [ { "dimension": "anger1D", "score": 5, "confidence": 0.217 }, { "dimension": "calm2D", "score": -3.9 }, { "dimension": "fear1D", "score": 0, "confidence": 0 }, { "dimension": "happy2D", "score": 0, "confidence": 0 }, { "dimension": "like2D", "score": -2.533, "confidence": 0.43 }, { "dimension": "shame1D", "score": 0, "confidence": 0 }, { "dimension": "sure2D", "score": 0, "confidence": 0 }, { "dimension": "surprise1D", "score": 0, "confidence": 0 } ], "start": 11, "end": 23, "sentenceIndex": 1, "text": "I have been called vile , villainous and evil for criticising her ." }, { "emotions": [ { "dimension": "anger1D", "score": 1.071, "confidence": 0.203 }, { "dimension": "calm2D", "score": -0.943, "confidence": 0.156 }, { "dimension": "fear1D", "score": 0.714, "confidence": 0.38 }, { "dimension": "happy2D", "score": -1.175, "confidence": 0.117 }, { "dimension": "like2D", "score": -0.536, "confidence": 0.223 }, { "dimension": "shame1D", "score": 0, "confidence": 0 }, { "dimension": "sure2D", "score": -0.286, "confidence": 0.09 }, { "dimension": "surprise1D", "score": 0.286, "confidence": 0.189 } ], "start": 14, "end": 24, "sentenceIndex": 1, "text": "I wonder how many times she cried and considered suicide ." }]

Emotions: Banded

/v1/emotion/bands

This alternative emotion analysis end point returns all emotion scores in a banded (normalised, discretised) form. Each normalised band is accompanied by an integer score from 0 to 5. The returned analysis lists emotion dimensions, each with one of the following labels and scores:

  • ABSENT - Score: 0. Range: (0, 0). Indicates that no emotion signal was detected.
  • WEAK - Score: 1. Range: (0.0, 0.2). Indicates extremely weak emotion signals.
  • FAIR - Score: 2. Range: (0.2, 0.4). Indicates weak emotion signals.
  • MODERATE - Score: 3. Range: (0.4, 0.6). Indicates fair (neither weak nor strong) emotion signals.
  • CONSIDERABLE - Score: 4. Range: (0.6, 0.8). Indicates strong emotion signals.
  • STRONG - Score: 5. Range: (0.8, 1.0). Indicates extremely strong emotion signals.
Returns banded (normalised, discretised) emotion dimensions for the entire input text (document-level emotion analysis).

POST  /v1/emotion/bands

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "emotions": [ { "dimension": "anger1D", "score": 3 }, { "dimension": "calm1D", "score": 3 }, { "dimension": "dislike1D", "score": 0 }, { "dimension": "fear1D", "score": 0 }, { "dimension": "happy1D", "score": 4 }, { "dimension": "like1D", "score": 2 }, { "dimension": "sad1D", "score": 2 }, { "dimension": "shame1D", "score": 0 }, { "dimension": "sure1D", "score": 0 }, { "dimension": "surprise1D", "score": 0 }, { "dimension": "uncalm1D", "score": 3 }, { "dimension": "unsure1D", "score": 2 } ]}
Returns banded (normalised, discretised) emotion dimensions for each sentence in the input text (sentence-level emotion analysis).

POST  /v1/emotion/bands

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentence-level analysis. Example: sentence
Response 200
[{ "emotions": [ { "dimension": "anger1D", "score": 5 }, { "dimension": "calm1D", "score": 0 }, { "dimension": "dislike1D", "score": 5 }, { "dimension": "fear1D", "score": 3 }, { "dimension": "happy1D", "score": 0 }, { "dimension": "like1D", "score": 0 }, { "dimension": "sad1D", "score": 0 }, { "dimension": "shame1D", "score": 0 }, { "dimension": "sure1D", "score": 0 }, { "dimension": "surprise1D", "score": 0 }, { "dimension": "uncalm1D", "score": 5 }, { "dimension": "unsure1D", "score": 0 } ], "start": 11, "end": 23, "sentenceIndex": 1, "text": "I have been called vile , villainous and evil for criticising her ." }, { "emotions": [ { "dimension": "anger1D", "score": 3 }, { "dimension": "calm1D", "score": 1 }, { "dimension": "dislike1D", "score": 3 }, { "dimension": "fear1D", "score": 3 }, { "dimension": "happy1D", "score": 1 }, { "dimension": "like1D", "score": 1 }, { "dimension": "sad1D", "score": 4 }, { "dimension": "shame1D", "score": 0 }, { "dimension": "sure1D", "score": 0 }, { "dimension": "surprise1D", "score": 1 }, { "dimension": "uncalm1D", "score": 4 }, { "dimension": "unsure1D", "score": 1 } ], "start": 14, "end": 24, "sentenceIndex": 1, "text": "I wonder how many times she cried and considered suicide ." }]

Topics

/v1/topic

This end point generates a fine-grained topic profile for a piece of text, drawing on a set of generic, general-purpose topics and subject headings. For some topics, more specific subtopics are denoted with the dot operator (.) (e.g. FINANCE.FOREX). The returned topic distribution includes topics whose confidence levels are at least 0.5, and covers the following topic labels:

  • ACCIDENTS - All accidents
  • ACCIDENTS.AVIATION - Plane accidents
  • ACCIDENTS.TRAFFIC - Traffic accidents
  • ARTS_CULTURE - Arts, culture, cultural events, artists
  • BOOKS_LITERATURE - Books, literature, authors, bestsellers
  • BUSINESS - Business
  • BUSINESS.EARNINGS - Business earnings and results
  • BUSINESS.IPO - Initial public offerings
  • BUSINESS.MERGERS_AND_ACQUISITIONS - Corporate mergers and acquisitions
  • CELEBRITIES - Celebrities, celebrity culture, reality shows, talent shows
  • COMPUTING - Computing, computers, IT, software, hardware, networks, IoT, operating systems, programming
  • COMPUTING.AI - Artificial intelligence
  • COMPUTING.BLOCKCHAIN - Blockchain computing
  • COMPUTING.CLOUD - Cloud computing
  • COMPUTING.IOT - Internet of things
  • COMPUTING.SECURITY - Cybersecurity and hacking
  • CRIME - Crime, policing
  • DEFENCE_MILITARY - Defence, military, army, war
  • EDUCATION - Education, schools, universities
  • ELECTRONICS - Electronics, consumer electronics, gadgets
  • EMERGENCIES_DISASTERS - Emergencies, catastrophes, natural disasters, man-made disasters, epidemics
  • EMPLOYMENT_WORK - Employment, work, careers
  • ENERGY - General energy topics
  • ENERGY.NUCLEAR - Nuclear energy
  • ENERGY.OILGAS - Fossil fuel, oil and gas topics
  • ENERGY.RENEWABLE - Green energy and renewables
  • ENTERTAINMENT - Entertainment, showbiz, cinema, TV
  • ENVIRONMENT - Environment, environmental issues, environmentalism
  • FASHION_STYLE - Fashion, fashion designers, fashion brands, style
  • FINANCE - Finance, investment, accounting
  • FINANCE.ALTCURRENCY - Alternative currencies and cryptocurrencies
  • FINANCE.FOREX - Foreign exchange, currencies
  • FINANCE.MOVEMENT - Movement in investments
  • FINANCE.RATING - Rating upgrades and downgrades
  • FOOD_COOKING - Food, drink, cooking, cuisine
  • GAMING - Gaming, video games, PC games, gaming platforms
  • HEALTHCARE_MEDICINE - Health, healthcare, medicine, medical conditions, diseases
  • HEALTH_FITNESS - Health, fitness, well-being, physical exercise
  • IMMIGRATION - Immigration, immigrants, refugees
  • INTELLECTUAL_PROPERTY - Intellectual property, copyright, patents, trademarks
  • LAW - Law, legal issues, litigation
  • MOTORING - Motoring, cars, motorcycles
  • PERSONAL_FINANCE - Personal finance
  • PHARMACEUTICALS - Pharmaceutical industry issues
  • POLITICS - Politics, politicians, elections, governments
  • REAL_ESTATE_PROPERTY - Real estate, property, housing
  • RELIGION - Religion, religious issues
  • RELIGION.CHRISTIANITY - Christianity
  • RELIGION.ISLAM - Islam
  • RELIGION.JUDAISM - Judaism
  • RETAIL - Retail business topics
  • SCIENCE - Sciences
  • SCIENCE.BIOTECH - Biotechnology
  • SCIENCE.NANOTECH - Nanotechnology
  • SOCIAL_MEDIA - Social media
  • SPORTS - Sports
  • TECHNOLOGY - General technology
  • TERRORISM - Terrorism
  • TRANSPORTATION - General transportation, traffic, logistics
  • TRANSPORTATION.AIR - Airlines and airfreight
  • TRANSPORTATION.MARITIME - Shipping
  • TRANSPORTATION.RAILWAY - Railways
  • TRANSPORTATION.ROAD - Traffic and road logistics
  • WEATHER - Weather
Returns a topic profile for the entire input text (document-level topic classification).

POST  /v1/topic

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "scores": [ { "label": "FINANCE", "confidence": 0.782 }, { "label": "CURRENCY_MONEY", "confidence": 0.725 } ] }

Consumer Vulnerability

/v1/vulnerability

Consumer Vulnerability is a key concern for a number of industries, especially personal finance services. This end point detects potential signals that the author is in a vulnerable situation. Examples of such signals are engagement with healthcare services or illness, bereavement and relationship break-down.

Returns an array of signals detected within the text. This array will be empty if no signals are identified. Currently the only signal returned is 'VULNERABILITY'.

POST  /v1/vulnerability

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "vulnerability": [ "VULNERABILITY" ] }
Returns an array of objects, one for each sentence. Each sentence object will contain an array of any vulnerability signals detected within the text of that sentence. If no signal was detected within a sentence then this array will be empty.

POST  /v1/vulnerability

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentence-level analysis. Example: sentence
Response 200
[ { "vulnerability":[ "VULNERABILITY" ], "start":0, "end":5, "sentenceIndex":0, "text":"My father recently passed away ." }, { "vulnerability":[ ], "start":6, "end":11, "sentenceIndex":1, "text":"He was a good man. ." } ]

Speculation

/v1/speculation

Speculative language describes or refers, directly or indirectly, to irrealis events that are yet to happen. Speculative expressions can hence cover concepts as diverse as future, certainty, doubt, prediction, wanting, wishes, and waiting, to name a few. This service detects speculative expressions at the sentence level. The response contains only 'positive' matches: if no speculative content is detected, the response is []. Any identified subtypes of speculation are denoted with the dot operator (.) (e.g. SPECULATION.SUBTYPE).

POST  /v1/speculation

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "start": 0, "end": 8, "sentenceIndex": 0, "speculationType": "SPECULATION.ADVICE", "text": "It 's probably not advisable to use it ." }]

Risks

/v1/risk

This sentence-level service detects expressions that describe or refer to risk and danger, either directly or indirectly. The response contains only 'positive' matches: if no risk expressions are detected, the response is []. Any identified subtypes of risk are denoted with the dot operator (.) (e.g. RISK.SUBTYPE).

POST  /v1/risk

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "start": 0, "end": 8, "sentenceIndex": 0, "riskType": "RISK", "text": "Your plan sounds plain dangerous in my mind." }]

Intent

/v1/intent

This sentence-level service detects expressions pertaining to intent, intentions, plans, and decisions. The response contains only 'positive' matches: if no intent expressions are detected, the response is []. Any identified subtypes of intent are denoted with the dot operator (.) (e.g. INTENT.SUBTYPE).

POST  /v1/intent

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "start": 0, "end": 11, "sentenceIndex": 0, "intentType": "INTENT.DECISION", "text": "I have made a decision to purchase the new improved camera model." }]

Advertisements

/v1/ad

Because advertisements are spammy and almost invariably positive, they can skew sentiment measurements. This service allows you to detect texts that are or resemble advertisements. The returned analysis offers advertisement type labels (AD vs. NOT_AD) as well as confidence values from the predictions.

Returns an advertisement prediction for the entire input text (document-level advertisement detection).

POST  /v1/ad

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "score": { "label": "AD", "confidence": 1 } }

Comparisons

/v1/comparison

This sentence-level service detects comparative expressions. The response contains only 'positive' matches: if no comparative expressions are detected, the response is []. Any identified finer-grained comparative expressions are denoted with the dot operator (.) (e.g. COMPARISON.SUBTYPE).

POST  /v1/comparison

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "start": 0, "end": 9, "sentenceIndex": 0, "comparisonType": "COMPARISON", "text": "Scala is much better than any other programming language ." }]

Named Entities

/v1/namedentity

This service detects expressions in the text snippet that refer explicitly or implicitly to

  • people and humans in general (PEOPLE)
  • places and locations (LOCATION)
  • organisations and companies (ORGANISATION)
  • times and dates (TIMEDATE)
  • monetary issues (MONEY)

For each identified expression (which can be a simple or complex Noun Phrase, Adjective Phrase, or Adverb Phrase), the detected Named Entity types are ranked by their salience (most salient first).

POST  /v1/namedentity

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "head": "Hollande", "headIndex": 5, "start": 0, "end": 5, "sentence": "The new French President Francois Hollande wants a '' growth pact '' in Europe - a set of reforms designed to boost European economies and mitigate the pain caused by government spending cuts across the continent .", "sentenceHtml": "The new French President Francois Hollande wants a '' growth pact '' in Europe - a set of reforms designed to boost European economies and mitigate the pain caused by government spending cuts across the continent .", "text": "The new French President Francois Hollande", "namedEntityTypes": ["PEOPLE"], "confidence": 0.994, "textTopicality": { "syntaxSalience": 1, "textPosition": 0.139 } }, { "head": "area", "headIndex": 7, "start": 6, "end": 15, "sentence": "The three lifeboats have been searching an area 25 miles ( 40km ) south of Wick , in the Beatrice oil field , for the two crew who remain missing .", "sentenceHtml": "The three lifeboats have been searching an area 25 miles ( 40km ) south of Wick , in the Beatrice oil field , for the two crew who remain missing .", "text": "an area 25 miles ( 40km ) south of Wick", "namedEntityTypes": ["LOCATION"], "confidence": 0.91, "textTopicality": { "syntaxSalience": 0.95, "textPosition": 0.233 } }, { "head": "Co-op", "headIndex": 1, "start": 0, "end": 1, "sentence": "The Co-op will pay GBP350m upfront and up to an additional # 400m based on the performance of the combined business .", "sentenceHtml": "The Co-op will pay GBP350m upfront and up to an additional # 400m based on the performance of the combined business .", "text": "The Co-op", "namedEntityTypes": ["ORGANISATION"], "confidence": 0.996, "textTopicality": { "syntaxSalience": 0.5, "textPosition": 0.05 } }, { "head": "shares", "headIndex": 31, "start": 30, "end": 31, "sentence": "The resolution for change was filed by Christian Brothers Investment Services ( CBIS ) and members of the Local Authority Pension Fund Forum ( LAPFF ) , organizations that own B shares .", "sentenceHtml": "The resolution for change was filed by Christian Brothers Investment Services ( CBIS ) and members of the Local Authority Pension Fund Forum ( LAPFF ) , organizations that own B shares .", "text": "B shares", "namedEntityTypes": ["MONEY"], "confidence": 0.924, "textTopicality": { "syntaxSalience": 0.95, "textPosition": 0.969 } }]

Part-of-Speech Tags

/v1/postag

This service assigns word class types to individual words in the text snippet. The tagset used is largely compatible with the Penn Treebank Tagset.

POST  /v1/postag

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "posTag": "PRP", "posTaggedWord": "I/PRP", "sentenceIndex": 0, "stem": "I|i", "text": "I", "wordIndex": 0 }, { "posTag": "MD", "posTaggedWord": "might/MD", "sentenceIndex": 0, "stem": "might|may", "text": "might", "wordIndex": 1 }, { "posTag": "VB", "posTaggedWord": "buy/VB", "sentenceIndex": 0, "stem": "buy", "text": "buy", "wordIndex": 2 }, { "posTag": "DT", "posTaggedWord": "a/DT", "sentenceIndex": 0, "stem": "a", "text": "a", "wordIndex": 3 }, { "posTag": "NNP", "posTaggedWord": "MacBookPro/NNP", "sentenceIndex": 0, "stem": "MacBookPro|macbookpro", "text": "MacBookPro", "wordIndex": 4 }, { "posTag": ".", "posTaggedWord": "./.", "sentenceIndex": 0, "stem": ".", "text": ".", "wordIndex": 5 }]

Phrase Chunks

/v1/chunkparse

This service detects the boundaries of basic shallow syntactic phrases in the text snippet. For each sentence, simple non-recursive Noun Phrase (NP) and Verb Phrase (VP) constituents are provided.

POST  /v1/chunkparse

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "chunk": { "chunkType": "", "end": 0, "sentenceIndex": 0, "start": 0, "text": "The" }, "head": { "posTag": "DT", "posTaggedWord": "The/DT", "stem": "The", "text": "The", "wordIndex": 0 } }, { "chunk": { "chunkType": "", "end": 1, "sentenceIndex": 0, "start": 1, "text": "latest" }, "head": { "posTag": "JJS", "posTaggedWord": "latest/JJS", "stem": "late", "text": "latest", "wordIndex": 1 } }, { "chunk": { "chunkType": "NP", "end": 2, "sentenceIndex": 0, "start": 0, "text": "The latest patch" }, "head": { "posTag": "NN", "posTaggedWord": "patch/NN", "stem": "patch", "text": "patch", "wordIndex": 2 } }, { "chunk": { "chunkType": "", "end": 3, "sentenceIndex": 0, "start": 3, "text": "will" }, "head": { "posTag": "MD", "posTaggedWord": "will/MD", "stem": "will", "text": "will", "wordIndex": 3 } }, { "chunk": { "chunkType": "", "end": 4, "sentenceIndex": 0, "start": 4, "text": "probably" }, "head": { "posTag": "RB", "posTaggedWord": "probably/RB", "stem": "probably", "text": "probably", "wordIndex": 4 } }, { "chunk": { "chunkType": "VP", "end": 5, "sentenceIndex": 0, "start": 3, "text": "will probably solve" }, "head": { "posTag": "VB", "posTaggedWord": "solve/VB", "stem": "solve", "text": "solve", "wordIndex": 5 } }, { "chunk": { "chunkType": "", "end": 6, "sentenceIndex": 0, "start": 6, "text": "all" }, "head": { "posTag": "PDT", "posTaggedWord": "all/PDT", "stem": "all", "text": "all", "wordIndex": 6 } }, { "chunk": { "chunkType": "", "end": 7, "sentenceIndex": 0, "start": 7, "text": "your" }, "head": { "posTag": "PRP$", "posTaggedWord": "your/PRP$", "stem": "your", "text": "your", "wordIndex": 7 } }, { "chunk": { "chunkType": "NP", "end": 8, "sentenceIndex": 0, "start": 6, "text": "all your problems" }, "head": { "posTag": "NNS", "posTaggedWord": "problems/NNS", "stem": "problem", "text": "problems", "wordIndex": 8 } }, { "chunk": { "chunkType": "", "end": 9, "sentenceIndex": 0, "start": 9, "text": "." }, "head": { "posTag": ".", "posTaggedWord": "./.", "stem": ".", "text": ".", "wordIndex": 9 } }]

Dependency Parses

/v1/depparse

This service analyses the grammatical structure of each sentence in the text snippet. For each sentence, typed syntactic dependencies between individual words are provided. The parses and the typed dependencies used resemble the labels and types described in the Cambridge Grammar of the English Language.

POST  /v1/depparse

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "dependency": { "predicate": "nsubj(got, I)", "relation": "nsubj" }, "dependent": { "text": "I", "stem": "I|i", "wordIndex": 0 }, "governor": { "text": "got", "stem": "got|get", "wordIndex": 1 } }, { "dependency": { "predicate": "(root, got)", "relation": "" }, "dependent": { "text": "got", "stem": "got|get", "wordIndex": 1 } }, { "dependency": { "predicate": "det(camera, a)", "relation": "det" }, "dependent": { "text": "a", "stem": "a", "wordIndex": 2 }, "governor": { "text": "camera", "stem": "camera", "wordIndex": 4 } }, { "dependency": { "predicate": "amod(camera, new)", "relation": "amod" }, "dependent": { "text": "new", "stem": "new", "wordIndex": 3 }, "governor": { "text": "camera", "stem": "camera", "wordIndex": 4 } }, { "dependency": { "predicate": "dobj(got, camera)", "relation": "dobj" }, "dependent": { "text": "camera", "stem": "camera", "wordIndex": 4 }, "governor": { "text": "got", "stem": "got|get", "wordIndex": 1 } }, { "dependency": { "predicate": "rel(takes, which)", "relation": "rel" }, "dependent": { "text": "which", "stem": "which", "wordIndex": 5 }, "governor": { "text": "takes", "stem": "takes|take", "wordIndex": 6 } }, { "dependency": { "predicate": "rcmod(camera, takes)", "relation": "rcmod" }, "dependent": { "text": "takes", "stem": "takes|take", "wordIndex": 6 }, "governor": { "text": "camera", "stem": "camera", "wordIndex": 4 } }, { "dependency": { "predicate": "amod(photos, brilliant)", "relation": "amod" }, "dependent": { "text": "brilliant", "stem": "brilliant", "wordIndex": 7 }, "governor": { "text": "photos", "stem": "photos|photo", "wordIndex": 8 } }, { "dependency": { "predicate": "dobj(takes, photos)", "relation": "dobj" }, "dependent": { "text": "photos", "stem": "photos|photo", "wordIndex": 8 }, "governor": { "text": "takes", "stem": "takes|take", "wordIndex": 6 } }, { "dependency": { "predicate": "(root, .)", "relation": "" }, "dependent": { "text": ".", "stem": ".", "wordIndex": 9 } }]

Text Summaries

/v1/summary

This service generates a summary from the input text. The summary consists of sentences delimited by \n.

POST  /v1/summary

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • ratio (number, optional) - Controls the size of the summary relative to the full input text. The range is 0 ≤ ratio ≤ 1.0, where 0 returns all sentences and 1.0 includes only the most salient sentence(s) in the input text. Example: 0.6
Response 200
[{ "summary": "Charities criticise UK for ending humanitarian aid\nCharities have criticised the UK after the govt announced it would stop direct aid to Peru in 2019.\n UK ministers said their relationship with Peru is more about trade and not development as such." }]

Language Detection

/v1/langdetect

This service returns an ISO 639-1 natural language code for the input text.

POST  /v1/langdetect

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "iso6391": "en" }

Analysis Recipes

/v1/analysis-recipes

☆☆☆PREMIUM USER☆☆☆

Using this end point you can access a library of predefined composite analysis recipes (cf. presets, patches, combos) which fuse multiple analysis requests into a single one (cf. multicall requests, joint requests, multibody responses). For example, you could execute both sentiment and emotion analysis with a single joint request instead of calling the sentiment and emotion end points individually.

Coverage

The recipes support analysis combinations that are popular amongst our users. If you require custom recipes beyond what is listed in the present API documentation, please contact us.

Analysis Groups

The analysis recipes combine individual single-call analysis end points in the form of higher-level, thematically organised analysis groups as follows:

  • affect - Subsumes /emotion and /sentiment. All analyses that involve subjective, non-factual, and affective information.
  • future - Subsumes /intent, /risk, and /speculation. All analyses that involve irrealis expressions and future-looking statements.
  • referents - Subsumes /namedentity. All analyses that involve entity mentions and references at the term/keyword/entity level.
  • topics - Subsumes /topic. All analyses that involve higher-level topics, subject headings, and themes.

The names of the analysis recipes are composed of these analysis group names.

Response Fields

The response fields are compatible with those returned by the single-call end points described elsewhere in the present API documentation. However, because the responses from joint multicall requests may include data spanning multiple structural levels in text, the following additional structural wrapper fields are used across the analyses (where relevant):

  • document - The document-level analyses executed.
  • entity - A list of entity mentions, each containing the entity-level analyses executed.
  • namedentity - A list of Named Entities, each containing the entity-level analyses executed.
  • sentence - A list of sentences, each containing the sentence-level analyses executed.
Basic, holistic affective signals at the document level.

POST  /v1/analysis-recipes/affect-1

Analyses executed: emotion, sentiment
Structural wrappers: document
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "emotion": [], "sentiment": {} } }
Basic, holistic affective signals at the document, entity, and sentence levels.

POST  /v1/analysis-recipes/affect-2

Analyses executed: emotion, sentiment
Structural wrappers: document, entity, sentence
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "emotion": [], "sentiment": {} }, "entity": [{ "sentiment": {} }], "sentence": [ { "emotion": [], "sentiment": {} } ] }
Basic, holistic affective signals alongside entity profiles in documents.

POST  /v1/analysis-recipes/affect-referents-1

Analyses executed: namedentity, sentiment
Structural wrappers: document, namedentity
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "sentiment": {} }, "namedentity": [] }
Basic, holistic affective signals alongside entity and topic profiles in documents.

POST  /v1/analysis-recipes/affect-referents-topics-1

Analyses executed: emotion, namedentity, sentiment, topic
Structural wrappers: document, namedentity
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "emotion": [], "sentiment": {}, "topic": [] }, "namedentity": [] }
Basic, holistic affective and topic signals at the document level.

POST  /v1/analysis-recipes/affect-topics-1

Analyses executed: sentiment, topic
Structural wrappers: document
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "sentiment": {}, "topic": [] } }
Basic, holistic affective and topic signals at the document level.

POST  /v1/analysis-recipes/affect-topics-2

Analyses executed: emotion, sentiment, topic
Structural wrappers: document
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "emotion": [], "sentiment": {}, "topic": [] } }
Basic forward-looking signals at the sentence level.

POST  /v1/analysis-recipes/future-1

Analyses executed: intent, risk, speculation
Structural wrappers: sentence
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "sentence": [ { "intentType": "INTENT.DECISION", "riskType": "RISK", "speculationType": "OTHER" } ] }
Basic non-affective entity and topic profiles in documents.

POST  /v1/analysis-recipes/referents-topics-1

Analyses executed: namedentity, topic
Structural wrappers: document, namedentity
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "topic": [] }, "namedentity": [] }

Resources: Sentiment

/v1/resources/lexicons/sentiment/{lexicon}

This end point allows you to manage the underlying lexical resources that are used for the sentiment analysis on your account. By fine-tuning and customising sentiment lexica (adjectives, adverbs, nouns, verbs), you can adapt the sentiment analysis to a particular genre, domain, topic, or use case beyond the default, generic, general-purpose resources.

POST  /v1/resources/lexicons/sentiment/{lexicon}

Parameters
  • lexicon (string, required) - Possible values: adjectives, adverbs, nouns, verbs.
Attributes
  • text (string, required) - The entry to be stored in the lexicon. Example: quasicool
  • polarity (string, required) - Sentiment polarity p of the lexicon entry, where p ∈ { pos | ntr | neg }. Example: pos
  • reverse (string, required) - Sentiment reversal r of the lexicon entry, where r ∈ { rev | equ }. Example: equ
Response 201

GET  /v1/resources/lexicons/sentiment/{lexicon}

Parameters
  • lexicon (string, required) - Possible values: adjectives, adverbs, nouns, verbs.
Response 200
       [{
        "text": "quasi-intelligent",
        "polarity": "pos",
        "id": "51b0630a7a233d39005ecc1e"
       }, {
        "text": "unemployment",
        "polarity": "ntr",
        "id": "51b0630a7a233d39005ecc1e"
       }]

GET  /v1/resources/lexicons/sentiment/{lexicon}/{objectID}

Parameters
  • lexicon (string, required) - Possible values: adjectives, adverbs, nouns, verbs.
  • objectID (string, required) - Unique ID of resource.
Response 200
       {
        "text": "quasi-intelligent",
        "polarity": "pos",
        "id": "51b0630a7a233d39005ecc1e"
       }

DELETE  /v1/resources/lexicons/sentiment/{lexicon}/{objectID}

Parameters
  • lexicon (string, required) - Possible values: adjectives, adverbs, nouns, verbs.
  • objectID (string, required) - Unique ID of resource.
Response 200
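
A sketch of the full lexicon management round trip: create an adjective entry, list the lexicon to find its id, then delete it. The entry values are illustrative, and the sketch assumes the POST body is sent as JSON like the analysis requests:

    import requests

    AUTH = ("your-username", "your-password")  # placeholder credentials
    BASE = "https://api.theysay.io/v1/resources/lexicons/sentiment"

    # Create a new positive adjective entry (expect 201 Created).
    created = requests.post(
        f"{BASE}/adjectives",
        json={"text": "quasicool", "polarity": "pos", "reverse": "equ"},
        auth=AUTH,
    )
    print(created.status_code)

    # List the adjective lexicon and pick up the id of the new entry.
    entries = requests.get(f"{BASE}/adjectives", auth=AUTH).json()
    entry_id = next(e["id"] for e in entries if e["text"] == "quasicool")

    # Remove the entry again.
    requests.delete(f"{BASE}/adjectives/{entry_id}", auth=AUTH)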

Resources: Topics

/v1/resources/topics/keywords

This end point allows you to expand the underlying resources that are used for the topic classification services on your account. To go beyond the default, generic, general-purpose topic classifiers (see the /topic section), you can upload simple weighted expressions for specific words and phrases to guarantee crisp, unconditional topic tags.

POST  /v1/resources/topics/keywords

Attributes
  • text (string, required) - The expression (word or phrase) to match. Example: roasted peanuts
  • classLabel (string, required) - The label (tag) to apply to the matched text. Example: YUMMY
  • weight (number, optional) - The weight of the matched text. Positive values boost and negative values suppress the importance of the class label in topic tagging. Example: 65
Response 201

GET  /v1/resources/topics/keywords

Response 200
       [{
        "id": "57a08990954324034f92fe0e",
        "classLabel": "YUMMY",
        "text": "roasted peanuts",
        "weight": 400
       }, {
        "id": "57a089ab954324034f92fe10",
        "classLabel": "YUCKY",
        "text": "salted peanuts",
        "weight": 12
       }]

GET  /v1/resources/topics/keywords/{objectID}

Parameters
  • objectID (string, required) - Unique ID of resource.
Response 200
       {
        "id": "57a08990954324034f92fe0e",
        "classLabel": "YUMMY",
        "text": "roasted peanuts",
        "weight": 400
       }

DELETE  /v1/resources/topics/keywords/{objectID}

Parameters
  • objectID (string, required) - Unique ID of resource.
Response 200

Resources: Named Entities

/v1/resources/namedentity/assertions

This end point allows you to manage the resources that are used for the Named Entity recognition services on your account. The resources support simple assertions that match specific words and phrases to guarantee crisp, unconditional Named Entity tags. Assertion-based matching takes priority over the default, generic, general-purpose Named Entity classifiers (see the /namedentity section).

POST  /v1/resources/namedentity/assertions

Attributes
  • text (string, required) - The text (word or phrase) to match. Example: text analytics
  • classLabel (string, required) - The label (tag) to apply to all Named Entities that match the specified text. Example: TECH.COOL
Response 201

GET  /v1/resources/namedentity/assertions

Response 200
       [{
        "id": "5784e2759543246479a8633e",
        "classLabel": "TECH.COOL",
        "text": "text analytics"
       }, {
        "id": "5784e24d9543246479a8633a",
        "classLabel": "ORG.COMPANY.EXCELLENT",
        "text": "TheySay"
       }]

GET  /v1/resources/namedentity/assertions/{objectID}

Parameters
  • objectID (string, required) - Unique ID of resource.
Response 200
       {
        "id": "5784e2759543246479a8633e",
        "classLabel": "TECH.COOL",
        "text": "text analytics"
       }

DELETE  /v1/resources/namedentity/assertions/{objectID}

Parameters
  • objectID (string, required) - Unique ID of resource.
Response 200

Resources: Entity Taxonomies

/v1/resources/taxonomies/entity

This end point allows you to manage the taxonomic resources that are used in the entity categorisation on your account. By adding pattern matching rules for taxonomic categories, you can categorise entity mentions into any desired taxonomic levels beyond the default head noun-based grouping.

POST  /v1/resources/taxonomies/entity

Attributes
  • matchPattern (string, required) - A regex pattern for capturing entity mentions. Example: (price(s)?|bill(s)?|offer(s)?|expensive|rip(-| )?off)
  • category (string, required) - The taxonomic category under which matched entity mentions should be categorised. Example: PRICE
Response 201

GET  /v1/resources/taxonomies/entity

Response 200
       [{
        "matchPattern": "(beer|lager|bitter)",
        "category": "FOOD.DRINK",
        "id": "51b0781f7a233d48005ecc20"
       }, {
        "matchPattern": "pizza(s)?",
        "category": "FOOD.PIZZA",
        "id": "51b0780a7a233d4e005ecc1f"
       }]

GET  /v1/resources/taxonomies/entity/{objectID}

Parameters
  • objectID (string, required) - Unique ID of resource.
Response 200
       {
        "matchPattern": "(beer|lager|bitter)",
        "category": "FOOD.DRINK",
        "id": "51b0781f7a233d48005ecc20"
       }

DELETE  /v1/resources/taxonomies/entity/{objectID}

Parameters
  • objectID (string, required) - Unique ID of resource.
Response 200

Account Usage

/v1/usagestats

You can monitor your API usage within a specific time period between two time stamps. The timestamps expect values that are compliant with the [W3C](http://www.w3.org/TR/NOTE-datetime) date and time format.

GET  /v1/usagestats

Parameters
  • from (string, required) - The W3C start value for the query. Example: 2013-02-01
  • to (string, optional) - The W3C end value for the query. If omitted, defaults to now. Example: 2013-02-13
  • groupby (string, optional) - The fields to group the results by. Possible values: date, method, path, status, ip. These can be combined as a comma-separated list, e.g. groupby=date,ip. The all value can be used as a shorthand. Defaults to date. Example: date
  • aggregate (string, optional) - The aggregates to compute. Possible values: count, length, duration. These can be combined as a comma-separated list, e.g. aggregate=length,duration. The all value can be used as a shorthand. Defaults to count. Example: duration
Response 200
{ "username": "yourUserName", "from": "2013-02-06T00:00:00.000Z", "to": "2013-02-13T00:00:00.000Z", "requestCount": 193, "dailyUsage": [{ "date": "2013-02-06T00:00:00.000Z", "requestCount": 0 }, { "date": "2013-02-07T00:00:00.000Z", "requestCount": 3 }, { "date": "2013-02-08T00:00:00.000Z", "requestCount": 97 }, { "date": "2013-02-09T00:00:00.000Z", "requestCount": 0 }, { "date": "2013-02-10T00:00:00.000Z", "requestCount": 0 }, { "date": "2013-02-11T00:00:00.000Z", "requestCount": 15 }, { "date": "2013-02-12T00:00:00.000Z", "requestCount": 72 }, { "date": "2013-02-13T00:00:00.000Z", "requestCount": 6 } ] }

Version 2 API

Sentiment: English

/v2/sentiment

Sentiment, a dimension of non-factuality in language that is closely related to subjectivity/affect/emotion/moods/feelings, reflects psychological evaluation with the following fundamental poles:

  • positive (~ good / pros / favourable / desirable / recommended / thumbs up /...) vs
  • negative (~ bad / cons / unfavourable / undesirable / not recommended / thumbs down /...)

You can use the Sentiment Analysis service to discover and score deep, fine-grained sentiments and opinions in text. The analysis, produced by a human-like sentiment reasoning algorithm, captures both explicit "author sentiment" and general, implicit "reader sentiment" beyond explicit opinions, which ultimately stems from affective common sense and from issues and events that are generally considered good or bad in the world.

The returned analysis includes majority sentiment labels, fine-grained 3-way positive/neutral/negative percentage scores, and other useful auxiliary fields.

Returns sentiment information about the entire text (document-level sentiment analysis).

POST  /v2/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) to control the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive": 3.5, "neutral": 2.7, "negative": 18 }
Response 200
{ "sentiment": { "label": "POSITIVE", "positive": 0.941, "negative": 0.0, "neutral": 0.059 }, "wordCount": 12 }
Returns sentiment information about each sentence in the text (sentence-level sentiment analysis).

POST  /v2/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentence-level analysis. Example: sentence
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) to control the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive": 3.5, "neutral": 2.7, "negative": 18 }
Response 200
[{ "sentiment": { "label": "POSITIVE", "positive": 0.787, "negative": 0.16, "neutral": 0.053, "confidence": 0.668 }, "start": 0, "end": 36, "sentenceIndex": 0, "text": "The new French President Francois Hollande wants a '' growth pact '' in Europe - a set of reforms designed to boost European economies and mitigate the pain caused by government spending cuts across the continent ." }, { "sentiment": { "label": "NEGATIVE", "positive": 0.347, "negative": 0.627, "neutral": 0.026, "confidence": 0.614 }, "start": 37, "end": 68, "sentenceIndex": 1, "text": "All the bad loans made by eurozone banks may need to be cleaned up ( by injecting money into the banks ) because many national governments probably can not afford it ." }]
Returns sentiment information about each individual entity (term, keyword) mentioned in the text (entity-level sentiment analysis).

POST  /v2/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables entity-level sentiment analysis. Example: entity
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) that controls the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive":3.5, "neutral":2.7, "negative":18 }
  • targets (string, optional) - Match expression for target entities. Example: market|business|opportunity|cost
  • matching (string, optional) - Matching mode. Example: head
Response 200
[{ "sentiment": { "label": "POSITIVE", "positive": 1.0, "negative": 0.0, "neutral": 0.0, "confidence": 0.756 }, "start": 2, "end": 2, "sentence": "'' This collaboration is sending a strong message to all the spammers : Stop sending us spam .", "sentenceHtml": "'' This <span class=\"entityMention\">collaboration</span> is sending a strong message to all the spammers : Stop sending us spam .", "text": "collaboration", "headNoun": "collaboration", "headNounIndex": 2, "salience": 1.0 }, { "sentiment": { "label": "NEGATIVE", "positive": 0.412, "negative": 0.588, "neutral": 0, "confidence": 0.689 }, "start": 11, "end": 11, "sentence": "'' This collaboration is sending a strong message to all the spammers : Stop sending us spam .", "sentenceHtml": "'' This collaboration is sending a strong message to all the <span class=\"entityMention\">spammers</span> : Stop sending us spam .", "text": "spammers", "headNoun": "spammers", "headNounIndex": 11, "salience": 0.7 }]
Returns sentiment information about aggregated entities (terms, keywords) mentioned in the text. Individual entity mentions are grouped using lowercase head noun matching and scored using weighted sentiment scores.

POST  /v2/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentiment analysis for entity aggregates. Example: entityaggregate
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) that controls the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive":3.5, "neutral":2.7, "negative":18 }
Response 200
[{ "entity": "osborne", "frequency": 2, "sentiment": { "label": "NEGATIVE", "positive": 0.0, "negative": 0.96, "neutral": 0.04, "confidence": 0.801 }, "salience": 1.0, "mentions": [{ "sentiment": { "label": "NEGATIVE", "positive": 0.0, "negative": 0.851, "neutral": 0.149, "confidence": 0.775 }, "start": 0, "end": 1, "sentence": "Mr Osborne said the banking system was not working for its customers .", "sentenceHtml": " <span class=\"entityMention\">Mr Osborne</span> said the banking system was not working for its customers .", "text": "Mr Osborne", "headNoun": "Osborne", "headNounIndex": 1, "salience": 1.0 }, { "sentiment": { "label": "NEGATIVE", "positive": 0.0, "negative": 0.861, "neutral": 0.139, "confidence": 0.827 }, "start": 13, "end": 13, "sentence": "Osborne also said that banks had failed to take responsibility for their actions .", "sentenceHtml": " <span class=\"entityMention\">Osborne</span> also said that banks had failed to take responsibility for their actions .", "text": "Osborne", "headNoun": "Osborne", "headNounIndex": 13, "salience": 1.0 }] }]
Returns sentiment information about detailed relations between entities (terms, keywords) mentioned in the text.

POST  /v2/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables relational entity-level sentiment analysis. Example: entityrelation
Response 200
[{ "entity1": { "head": "Avanesov", "headIndex": 2, "text": "Russian Georgiy Avanesov" }, "entity2": { "head": "botnet", "headIndex": 17, "text": "Bredolab botnet" }, "sentiment": { "label": "NEGATIVE", "positive": 0.209, "negative": 0.523, "neutral": 0.268 }, "salience": 0.243, "sentence": "Russian Georgiy Avanesov was in May sentenced to four years in jail for being behind the Bredolab botnet which was believed to have been generating more than # 80,000 a month in revenue .", "sentenceHtml": " <span class=\"entity1\">Russian Georgiy Avanesov</span> was in May sentenced to four years in jail for being behind the <span class=\"entity2\">Bredolab botnet</span> which was believed to have been generating more than # 80,000 a month in revenue ." }, { "entity1": { "head": "Avanesov", "headIndex": 2, "text": "Russian Georgiy Avanesov" }, "entity2": { "head": "revenue", "headIndex": 32, "text": "revenue" }, "sentiment": { "label": "POSITIVE", "positive": 0.377, "negative": 0.314, "neutral": 0.309 }, "salience": 0.155, "sentence": "Russian Georgiy Avanesov was in May sentenced to four years in jail for being behind the Bredolab botnet which was believed to have been generating more than # 80,000 a month in revenue .", "sentenceHtml": " <span class=\"entity1\">Russian Georgiy Avanesov</span> was in May sentenced to four years in jail for being behind the Bredolab botnet which was believed to have been generating more than # 80,000 a month in <span class=\"entity2\">revenue</span> ." }]
Returns information about the flow of sentiment through the text (document-level sentiment timeline analysis). The analysis covers contextual sentence-level sentiment labels and positional co-ordinates for individual words in the text, which you can use to plot the temporal development (or flow) of sentiment through the text.

POST  /v2/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables document-level sentiment flow analysis. Example: word
Response 200
[{ "sentiment": { "label": "NEGATIVE", "timelineY": -1.0 }, "wordIndex": 0, "text": "There" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.004 }, "wordIndex": 1, "text": "have" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.008 }, "wordIndex": 2, "text": "been" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.012 }, "wordIndex": 3, "text": "clashes" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.0170000000000001 }, "wordIndex": 4, "text": "throughout" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.0210000000000001 }, "wordIndex": 5, "text": "the" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.025 }, "wordIndex": 6, "text": "night" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.029 }, "wordIndex": 7, "text": "in" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.033 }, "wordIndex": 8, "text": "many" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.037 }, "wordIndex": 9, "text": "parts" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.042 }, "wordIndex": 10, "text": "of" }, { "sentiment": { "label": "NEGATIVE", "timelineY": -1.046 }, "wordIndex": 11, "text": "Syria" }, { "sentiment": { "label": "NEUTRAL", "timelineY": -1.046 }, "wordIndex": 12, "text": "." }]

Sentiment: Multilingual

/v2/multilingual/sentiment

Beyond English, the API offers sentiment analysis for German (de) and Spanish (es) at the document and the sentence levels.

Returns sentiment information about the entire text (document-level sentiment analysis).

POST  /v2/multilingual/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • language (string, required) - The ISO 639-1 natural language code for the input text. See http://www.loc.gov/standards/iso639-2/php/code_list.php for the codes. Example: de
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) that controls the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive":3.5, "neutral":2.7, "negative":18 }
Response 200
{ "sentiment": { "label": "POSITIVE", "positive": 0.941, "negative": 0.0, "neutral": 0.059 }, "wordCount": 12 }
Returns sentiment information about each sentence in the text (sentence-level sentiment analysis).

POST  /v2/multilingual/sentiment

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentence-level analysis. Example: sentence
  • language (string, required) - The ISO 639-1 natural language code for the input text. See http://www.loc.gov/standards/iso639-2/php/code_list.php for the codes. Example: de
  • bias (object, optional) - Sentiment coefficient d (0 ≤ d ≤ 100) that controls the (in)sensitivity of the sentiment analysis towards sentiment polarity p, where p ∈ { positive | neutral | negative }. Example: { "positive":3.5, "neutral":2.7, "negative":18 }
Response 200
[{ "sentiment": { "label": "NEGATIVE", "positive": 0.147, "negative": 0.778, "neutral": 0.076 }, "start": 0, "end": 14, "sentenceIndex": 0, "text": "Rund 70 Flüchtlinge haben in Heidenau gegen die schlechten Bedingungen in ihrer Unterkunft protestiert ." },{ "sentiment": { "label": "NEGATIVE", "positive": 0.256, "negative": 0.573, "neutral": 0.17 }, "start": 15, "end": 22, "sentenceIndex": 1, "text": "USA bereiten sich auf 10.000 syrische Flüchtlinge vor" }]

Emotions: Unbounded

/v2/emotion

Beyond positive vs. negative sentiment polarity, a vast range of psychological dimensions exists in the realm of emotions/moods/feelings/affect. You can use the Emotion Analysis service to project the text onto a fine-grained, multi-dimensional emotion space, which is more natural than a single majority label. By default, all emotion scores are unbounded (unnormalised, unscaled). The returned analysis lists emotion dimension labels, each with a confidence value from the prediction, and covers the following basic emotion dimensions:

  • anger1D - 1-dimensional anger scale (> 0).
  • fear1D - 1-dimensional fear scale (> 0).
  • shame1D - 1-dimensional shame scale (> 0).
  • surprise1D - 1-dimensional surprise scale (> 0).
  • calm2D - 2-dimensional scale between calmness (> 0) vs. agitation (< 0).
  • happy2D - 2-dimensional scale between happiness (> 0) vs. sadness (< 0).
  • like2D - 2-dimensional scale between liking (> 0) vs. disliking/disgust (< 0).
  • sure2D - 2-dimensional scale between certainty/sureness (> 0) vs. uncertainty/unsureness (< 0).
Returns unbounded (unnormalised, unscaled) emotion dimensions for the entire input text (document-level emotion analysis).

POST  /v2/emotion

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "emotions": [ { "dimension": "anger1D", "score": 1.667, "confidence": 0.217 }, { "dimension": "calm2D", "score": -0.478, "confidence": 0.032 }, { "dimension": "fear1D", "score": 0 }, { "dimension": "happy2D", "score": 0, "confidence": 0 }, { "dimension": "like2D", "score": -1.4, "confidence": 0.099 }, { "dimension": "shame1D", "score": 0, "confidence": 0 }, { "dimension": "sure2D", "score": -0.667, "confidence": 0.095 }, { "dimension": "surprise1D", "score": 0, "confidence": 0 } ]}
Returns unbounded (unnormalised, unscaled) emotion dimensions for each sentence in the input text (sentence-level emotion analysis).

POST  /v2/emotion

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentence-level analysis. Example: sentence
Response 200
[{ "emotions": [ { "dimension": "anger1D", "score": 5, "confidence": 0.217 }, { "dimension": "calm2D", "score": -3.9 }, { "dimension": "fear1D", "score": 0, "confidence": 0 }, { "dimension": "happy2D", "score": 0, "confidence": 0 }, { "dimension": "like2D", "score": -2.533, "confidence": 0.43 }, { "dimension": "shame1D", "score": 0, "confidence": 0 }, { "dimension": "sure2D", "score": 0, "confidence": 0 }, { "dimension": "surprise1D", "score": 0, "confidence": 0 } ], "start": 11, "end": 23, "sentenceIndex": 1, "text": "I have been called vile , villainous and evil for criticising her ." }, { "emotions": [ { "dimension": "anger1D", "score": 1.071, "confidence": 0.203 }, { "dimension": "calm2D", "score": -0.943, "confidence": 0.156 }, { "dimension": "fear1D", "score": 0.714, "confidence": 0.38 }, { "dimension": "happy2D", "score": -1.175, "confidence": 0.117 }, { "dimension": "like2D", "score": -0.536, "confidence": 0.223 }, { "dimension": "shame1D", "score": 0, "confidence": 0 }, { "dimension": "sure2D", "score": -0.286, "confidence": 0.09 }, { "dimension": "surprise1D", "score": 0.286, "confidence": 0.189 } ], "start": 14, "end": 24, "sentenceIndex": 1, "text": "I wonder how many times she cried and considered suicide ." }]

Emotions: Banded

/v2/emotion/bands

This alternative emotion analysis end point returns all emotion scores in a banded (normalised, discretised) form. The normalised bands are accompanied by integer scores from 0 to 5. The returned analysis lists emotion dimensions, each with one of the following labels and scores (a minimal score-to-label lookup sketch follows the list):

  • ABSENT - Score: 0. Range: (0, 0). Indicates that no emotion signal was detected.
  • WEAK - Score: 1. Range: (0.0, 0.2). Indicates extremely weak emotion signals.
  • FAIR - Score: 2. Range: (0.2, 0.4). Indicates weak emotion signals.
  • MODERATE - Score: 3. Range: (0.4, 0.6). Indicates fair (neither weak nor strong) emotion signals.
  • CONSIDERABLE - Score: 4. Range: (0.6, 0.8). Indicates strong emotion signals.
  • STRONG - Score: 5. Range: (0.8, 1.0). Indicates extremely strong emotion signals.
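For convenience, the band labels above can be recovered from the integer scores client-side. A minimal lookup sketch, assuming ABSENT corresponds to score 0 as in the response examples below:

    # Band labels keyed by the integer score returned by /v2/emotion/bands.
    EMOTION_BANDS = {
        0: "ABSENT",        # no emotion signal detected
        1: "WEAK",          # (0.0, 0.2)
        2: "FAIR",          # (0.2, 0.4)
        3: "MODERATE",      # (0.4, 0.6)
        4: "CONSIDERABLE",  # (0.6, 0.8)
        5: "STRONG",        # (0.8, 1.0)
    }

    def band_label(score: int) -> str:
        return EMOTION_BANDS.get(score, "UNKNOWN")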
Returns banded (normalised, discretised) emotion dimensions for the entire input text (document-level emotion analysis).

POST  /v2/emotion/bands

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "emotions": [ { "dimension": "anger1D", "score": 3 }, { "dimension": "calm1D", "score": 3 }, { "dimension": "dislike1D", "score": 0 }, { "dimension": "fear1D", "score": 0 }, { "dimension": "happy1D", "score": 4 }, { "dimension": "like1D", "score": 2 }, { "dimension": "sad1D", "score": 2 }, { "dimension": "shame1D", "score": 0 }, { "dimension": "sure1D", "score": 0 }, { "dimension": "surprise1D", "score": 0 }, { "dimension": "uncalm1D", "score": 3 }, { "dimension": "unsure1D", "score": 2 } ]}
Returns banded (normalised, discretised) emotion dimensions for each sentence in the input text (sentence-level emotion analysis).

POST  /v2/emotion/bands

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentence-level analysis. Example: sentence
Response 200
[{ "emotions": [ { "dimension": "anger1D", "score": 5 }, { "dimension": "calm1D", "score": 0 }, { "dimension": "dislike1D", "score": 5 }, { "dimension": "fear1D", "score": 3 }, { "dimension": "happy1D", "score": 0 }, { "dimension": "like1D", "score": 0 }, { "dimension": "sad1D", "score": 0 }, { "dimension": "shame1D", "score": 0 }, { "dimension": "sure1D", "score": 0 }, { "dimension": "surprise1D", "score": 0 }, { "dimension": "uncalm1D", "score": 5 }, { "dimension": "unsure1D", "score": 0 } ], "start": 11, "end": 23, "sentenceIndex": 1, "text": "I have been called vile , villainous and evil for criticising her ." }, { "emotions": [ { "dimension": "anger1D", "score": 3 }, { "dimension": "calm1D", "score": 1 }, { "dimension": "dislike1D", "score": 3 }, { "dimension": "fear1D", "score": 3 }, { "dimension": "happy1D", "score": 1 }, { "dimension": "like1D", "score": 1 }, { "dimension": "sad1D", "score": 4 }, { "dimension": "shame1D", "score": 0 }, { "dimension": "sure1D", "score": 0 }, { "dimension": "surprise1D", "score": 1 }, { "dimension": "uncalm1D", "score": 4 }, { "dimension": "unsure1D", "score": 1 } ], "start": 14, "end": 24, "sentenceIndex": 1, "text": "I wonder how many times she cried and considered suicide ." }]

Topics

/v2/topic

This end point generates a fine-grained topic profile for a piece of text, drawn from the generic, general-purpose topics and subject headings listed below. With some topics, more specific subtopics are denoted with the dot operator (.) (e.g. FINANCE.FOREX). The returned topic distribution includes only topics whose confidence levels are at least 0.5, and covers the following topic labels:

  • ACCIDENTS - All accidents
  • ACCIDENTS.AVIATION - Plane accidents
  • ACCIDENTS.TRAFFIC - Traffic accidents
  • ARTS_CULTURE - Arts, culture, cultural events, artists
  • BOOKS_LITERATURE - Books, literature, authors, bestsellers
  • BUSINESS - Business
  • BUSINESS.EARNINGS - Business earnings and results
  • BUSINESS.IPO - Initial public offerings
  • BUSINESS.MERGERS_AND_ACQUISITIONS - Corporate mergers and acquisitions
  • CELEBRITIES - Celebrities, celebrity culture, reality shows, talent shows
  • COMPUTING - Computing, computers, IT, software, hardware, networks, IoT, operating systems, programming
  • COMPUTING.AI - Artificial intelligence
  • COMPUTING.BLOCKCHAIN - Blockchain computing
  • COMPUTING.CLOUD - Cloud computing
  • COMPUTING.IOT - Internet of things
  • COMPUTING.SECURITY - Cybersecurity and hacking
  • CRIME - Crime, policing
  • DEFENCE_MILITARY - Defence, military, army, war
  • EDUCATION - Education, schools, universities
  • ELECTRONICS - Electronics, consumer electronics, gadgets
  • EMERGENCIES_DISASTERS - Emergencies, catastrophes, natural disasters, man-made disasters, epidemics
  • EMPLOYMENT_WORK - Employment, work, careers
  • ENERGY - General energy topics
  • ENERGY.NUCLEAR - Nuclear energy
  • ENERGY.OILGAS - Fossil fuel, oil and gas topics
  • ENERGY.RENEWABLE - Green energy and renewables
  • ENTERTAINMENT - Entertainment, showbiz, cinema, TV
  • ENVIRONMENT - Environment, environmental issues, environmentalism
  • FASHION_STYLE - Fashion, fashion designers, fashion brands, style
  • FINANCE - Finance, investment, accounting
  • FINANCE.ALTCURRENCY - Alternative currencies and cryptocurrencies
  • FINANCE.FOREX - Foreign exchange, currencies
  • FINANCE.MOVEMENT - Movement in investments
  • FINANCE.RATING - Rating upgrades and downgrades
  • FOOD_COOKING - Food, drink, cooking, cuisine
  • GAMING - Gaming, video games, PC games, gaming platforms
  • HEALTHCARE_MEDICINE - Health, healthcare, medicine, medical conditions, diseases
  • HEALTH_FITNESS - Health, fitness, well-being, physical exercise
  • IMMIGRATION - Immigration, immigrants, refugees
  • INTELLECTUAL_PROPERTY - Intellectual property, copyright, patents, trademarks
  • LAW - Law, legal issues, litigation
  • MOTORING - Motoring, cars, motorcycles
  • PERSONAL_FINANCE - Personal finance
  • PHARMACEUTICALS - Pharmaceutical industry issues
  • POLITICS - Politics, politicians, elections, governments
  • REAL_ESTATE_PROPERTY - Real estate, property, housing
  • RELIGION - Religion, religious issues
  • RELIGION.CHRISTIANITY - Christianity
  • RELIGION.ISLAM - Islam
  • RELIGION.JUDAISM - Judaism
  • RETAIL - Retail business topics
  • SCIENCE - Sciences
  • SCIENCE.BIOTECH - Biotechnology
  • SCIENCE.NANOTECH - Nanotechnology
  • SOCIAL_MEDIA - Social media
  • SPORTS - Sports
  • TECHNOLOGY - General technology
  • TERRORISM - Terrorism
  • TRANSPORTATION - General transportation, traffic, logistics
  • TRANSPORTATION.AIR - Airlines and airfreight
  • TRANSPORTATION.MARITIME - Shipping
  • TRANSPORTATION.RAILWAY - Railways
  • TRANSPORTATION.ROAD - Traffic and road logistics
  • WEATHER - Weather
Returns a topic profile for the entire input text (document-level topic classification).

POST  /v2/topic

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "scores": [ { "label": "FINANCE", "confidence": 0.782 }, { "label": "CURRENCY_MONEY", "confidence": 0.725 } ] }

Consumer Vulnerability

/v2/vulnerability

Consumer Vulnerability is a key concern for a number of industries, especially personal finance services. This end point detects potential signals that the author is in a vulnerable situation. Examples of such signals include engagement with healthcare services, illness, bereavement, and relationship breakdown.

Returns an array of signals detected within the text. This array will be empty if no signals are identified. Currently the only signal returned is 'VULNERABILITY'.

POST  /v2/vulnerability

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "vulnerability": [ "VULNERABILITY" ] }
Returns an array of objects, one for each sentence. Each sentence object will contain an array of any vulnerability signals detected within the text of that sentence. If no signal was detected within a sentence then this array will be empty.

POST  /v2/vulnerability

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • level (string, required) - Enables sentence-level analysis. Example: sentence
Response 200
[ { "vulnerability":[ "VULNERABILITY" ], "start":0, "end":5, "sentenceIndex":0, "text":"My father recently passed away ." }, { "vulnerability":[ ], "start":6, "end":11, "sentenceIndex":1, "text":"He was a good man. ." } ]

Speculation

/v2/speculation

Speculative language describes or refers directly or indirectly to irrealis events that are yet to happen. Speculative expressions can hence cover concepts as diverse as future, certainty, doubt, prediction, wanting, wishes, and waiting, to name a few. This service detects speculative expressions at the sentence level. The response contains only 'positive' matches: if no speculative content is detected, the response is [], accordingly. Any identified subtypes of speculation are denoted with the dot operator (.) (e.g. SPECULATION.SUBTYPE).

POST  /v2/speculation

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "start": 0, "end": 8, "sentenceIndex": 0, "speculationType": "SPECULATION.ADVICE", "text": "It 's probably not advisable to use it ." }]

Risks

/v2/risk

This sentence-level service detects expressions that describe or refer to risk and danger, either directly or indirectly. The response contains only 'positive' matches: if no risk expressions are detected, the response is hence []. Any identified subtypes of risk are denoted with the dot operator (.) (e.g. RISK.SUBTYPE).

POST  /v2/risk

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "start": 0, "end": 8, "sentenceIndex": 0, "riskType": "RISK", "text": "Your plan sounds plain dangerous in my mind." }]

Intent

/v2/intent

This sentence-level service detects expressions in the text that pertain to intent, intentions, plans, and decisions. The response contains only 'positive' matches: if no intent expressions are detected, the response is []. Any identified subtypes of intent are denoted with the dot operator (.) (e.g. INTENT.SUBTYPE).

POST  /v2/intent

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "start": 0, "end": 11, "sentenceIndex": 0, "intentType": "INTENT.DECISION", "text": "I have made a decision to purchase the new improved camera model." }]

Advertisements

/v2/ad

Because advertisements are spammy and almost invariably positive, they can skew sentiment measurements considerably. This service allows you to detect texts that are, or resemble, advertisements. The returned analysis provides advertisement type labels (AD vs. NOT_AD) together with confidence values for the predictions.

Returns an advertisement prediction for the entire input text (document-level advertisement detection).

POST  /v2/ad

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "score": { "label": "AD", "confidence": 1 } }

Comparisons

/v2/comparison

This sentence-level service detects comparative expressions. The response contains only 'positive' matches: if no comparative expressions are detected, the response is []. Any identified finer-grained comparative expressions are denoted with the dot operator (.) (e.g. COMPARISON.SUBTYPE).

POST  /v2/comparison

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "start": 0, "end": 9, "sentenceIndex": 0, "comparisonType": "COMPARISON", "text": "Scala is much better than any other programming language ." }]

Named Entities

/v2/namedentity

This service detects expressions in the text snippet that refer explicitly or implicitly to

  • people and humans in general (PEOPLE)
  • places and locations (LOCATION)
  • organisations and companies (ORGANISATION)
  • times and dates (TIMEDATE)
  • monetary issues (MONEY)

For each identified expression (which can be a simple or complex Noun Phrase, Adjective Phrase, or Adverb Phrase), the detected Named Entity types are ranked by their salience (most salient first).

POST  /v2/namedentity

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "head": "Hollande", "headIndex": 5, "start": 0, "end": 5, "sentence": "The new French President Francois Hollande wants a '' growth pact '' in Europe - a set of reforms designed to boost European economies and mitigate the pain caused by government spending cuts across the continent .", "sentenceHtml": "The new French President Francois Hollande wants a '' growth pact '' in Europe - a set of reforms designed to boost European economies and mitigate the pain caused by government spending cuts across the continent .", "text": "The new French President Francois Hollande", "namedEntityTypes": ["PEOPLE"], "confidence": 0.994, "textTopicality": { "syntaxSalience": 1, "textPosition": 0.139 } }, { "head": "area", "headIndex": 7, "start": 6, "end": 15, "sentence": "The three lifeboats have been searching an area 25 miles ( 40km ) south of Wick , in the Beatrice oil field , for the two crew who remain missing .", "sentenceHtml": "The three lifeboats have been searching an area 25 miles ( 40km ) south of Wick , in the Beatrice oil field , for the two crew who remain missing .", "text": "an area 25 miles ( 40km ) south of Wick", "namedEntityTypes": ["LOCATION"], "confidence": 0.91, "textTopicality": { "syntaxSalience": 0.95, "textPosition": 0.233 } }, { "head": "Co-op", "headIndex": 1, "start": 0, "end": 1, "sentence": "The Co-op will pay GBP350m upfront and up to an additional # 400m based on the performance of the combined business .", "sentenceHtml": "The Co-op will pay GBP350m upfront and up to an additional # 400m based on the performance of the combined business .", "text": "The Co-op", "namedEntityTypes": ["ORGANISATION"], "confidence": 0.996, "textTopicality": { "syntaxSalience": 0.5, "textPosition": 0.05 } }, { "head": "shares", "headIndex": 31, "start": 30, "end": 31, "sentence": "The resolution for change was filed by Christian Brothers Investment Services ( CBIS ) and members of the Local Authority Pension Fund Forum ( LAPFF ) , organizations that own B shares .", "sentenceHtml": "The resolution for change was filed by Christian Brothers Investment Services ( CBIS ) and members of the Local Authority Pension Fund Forum ( LAPFF ) , organizations that own B shares .", "text": "B shares", "namedEntityTypes": ["MONEY"], "confidence": 0.924, "textTopicality": { "syntaxSalience": 0.95, "textPosition": 0.969 } }]

Part-of-Speech Tags

/v2/postag

This service assigns word class types to individual words in the text snippet. The tagset used is largely compatible with the Penn Treebank Tagset.

POST  /v2/postag

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "posTag": "PRP", "posTaggedWord": "I/PRP", "sentenceIndex": 0, "stem": "I|i", "text": "I", "wordIndex": 0 }, { "posTag": "MD", "posTaggedWord": "might/MD", "sentenceIndex": 0, "stem": "might|may", "text": "might", "wordIndex": 1 }, { "posTag": "VB", "posTaggedWord": "buy/VB", "sentenceIndex": 0, "stem": "buy", "text": "buy", "wordIndex": 2 }, { "posTag": "DT", "posTaggedWord": "a/DT", "sentenceIndex": 0, "stem": "a", "text": "a", "wordIndex": 3 }, { "posTag": "NNP", "posTaggedWord": "MacBookPro/NNP", "sentenceIndex": 0, "stem": "MacBookPro|macbookpro", "text": "MacBookPro", "wordIndex": 4 }, { "posTag": ".", "posTaggedWord": "./.", "sentenceIndex": 0, "stem": ".", "text": ".", "wordIndex": 5 }]

Phrase Chunks

/v2/chunkparse

This service detects the boundaries of basic shallow syntactic phrases in the text snippet. For each sentence, simple non-recursive Noun Phrase (NP) and Verb Group (VG) constituents are provided.

POST  /v2/chunkparse

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "chunk": { "chunkType": "", "end": 0, "sentenceIndex": 0, "start": 0, "text": "The" }, "head": { "posTag": "DT", "posTaggedWord": "The/DT", "stem": "The", "text": "The", "wordIndex": 0 } }, { "chunk": { "chunkType": "", "end": 1, "sentenceIndex": 0, "start": 1, "text": "latest" }, "head": { "posTag": "JJS", "posTaggedWord": "latest/JJS", "stem": "late", "text": "latest", "wordIndex": 1 } }, { "chunk": { "chunkType": "NP", "end": 2, "sentenceIndex": 0, "start": 0, "text": "The latest patch" }, "head": { "posTag": "NN", "posTaggedWord": "patch/NN", "stem": "patch", "text": "patch", "wordIndex": 2 } }, { "chunk": { "chunkType": "", "end": 3, "sentenceIndex": 0, "start": 3, "text": "will" }, "head": { "posTag": "MD", "posTaggedWord": "will/MD", "stem": "will", "text": "will", "wordIndex": 3 } }, { "chunk": { "chunkType": "", "end": 4, "sentenceIndex": 0, "start": 4, "text": "probably" }, "head": { "posTag": "RB", "posTaggedWord": "probably/RB", "stem": "probably", "text": "probably", "wordIndex": 4 } }, { "chunk": { "chunkType": "VP", "end": 5, "sentenceIndex": 0, "start": 3, "text": "will probably solve" }, "head": { "posTag": "VB", "posTaggedWord": "solve/VB", "stem": "solve", "text": "solve", "wordIndex": 5 } }, { "chunk": { "chunkType": "", "end": 6, "sentenceIndex": 0, "start": 6, "text": "all" }, "head": { "posTag": "PDT", "posTaggedWord": "all/PDT", "stem": "all", "text": "all", "wordIndex": 6 } }, { "chunk": { "chunkType": "", "end": 7, "sentenceIndex": 0, "start": 7, "text": "your" }, "head": { "posTag": "PRP$", "posTaggedWord": "your/PRP$", "stem": "your", "text": "your", "wordIndex": 7 } }, { "chunk": { "chunkType": "NP", "end": 8, "sentenceIndex": 0, "start": 6, "text": "all your problems" }, "head": { "posTag": "NNS", "posTaggedWord": "problems/NNS", "stem": "problem", "text": "problems", "wordIndex": 8 } }, { "chunk": { "chunkType": "", "end": 9, "sentenceIndex": 0, "start": 9, "text": "." }, "head": { "posTag": ".", "posTaggedWord": "./.", "stem": ".", "text": ".", "wordIndex": 9 } }]

Dependency Parses

/v2/depparse

This service analyses the grammatical structure of each sentence in the text snippet. For each sentence, typed syntactic dependencies between individual words are provided. The parses and the typed dependencies used resemble the labels and types described in the Cambridge Grammar of the English Language.

POST  /v2/depparse

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
[{ "dependency": { "predicate": "nsubj(got, I)", "relation": "nsubj" }, "dependent": { "text": "I", "stem": "I|i", "wordIndex": 0 }, "governor": { "text": "got", "stem": "got|get", "wordIndex": 1 } }, { "dependency": { "predicate": "(root, got)", "relation": "" }, "dependent": { "text": "got", "stem": "got|get", "wordIndex": 1 } }, { "dependency": { "predicate": "det(camera, a)", "relation": "det" }, "dependent": { "text": "a", "stem": "a", "wordIndex": 2 }, "governor": { "text": "camera", "stem": "camera", "wordIndex": 4 } }, { "dependency": { "predicate": "amod(camera, new)", "relation": "amod" }, "dependent": { "text": "new", "stem": "new", "wordIndex": 3 }, "governor": { "text": "camera", "stem": "camera", "wordIndex": 4 } }, { "dependency": { "predicate": "dobj(got, camera)", "relation": "dobj" }, "dependent": { "text": "camera", "stem": "camera", "wordIndex": 4 }, "governor": { "text": "got", "stem": "got|get", "wordIndex": 1 } }, { "dependency": { "predicate": "rel(takes, which)", "relation": "rel" }, "dependent": { "text": "which", "stem": "which", "wordIndex": 5 }, "governor": { "text": "takes", "stem": "takes|take", "wordIndex": 6 } }, { "dependency": { "predicate": "rcmod(camera, takes)", "relation": "rcmod" }, "dependent": { "text": "takes", "stem": "takes|take", "wordIndex": 6 }, "governor": { "text": "camera", "stem": "camera", "wordIndex": 4 } }, { "dependency": { "predicate": "amod(photos, brilliant)", "relation": "amod" }, "dependent": { "text": "brilliant", "stem": "brilliant", "wordIndex": 7 }, "governor": { "text": "photos", "stem": "photos|photo", "wordIndex": 8 } }, { "dependency": { "predicate": "dobj(takes, photos)", "relation": "dobj" }, "dependent": { "text": "photos", "stem": "photos|photo", "wordIndex": 8 }, "governor": { "text": "takes", "stem": "takes|take", "wordIndex": 6 } }, { "dependency": { "predicate": "(root, .)", "relation": "" }, "dependent": { "text": ".", "stem": ".", "wordIndex": 9 } }]

Text Summaries

/v2/summary

This service generates a summary from the input text. The summary consists of sentences delimited by \n.

POST  /v2/summary

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
  • ratio (number, optional) - Controls the size of the summary relative to the full input text. The range is 0 ≤ ratio ≤ 1.0, where 0 returns all sentences and 1.0 returns only the most salient sentence(s) in the input text. Example: 0.6
Response 200
[{ "summary": "Charities criticise UK for ending humanitarian aid\nCharities have criticised the UK after the govt announced it would stop direct aid to Peru in 2019.\n UK ministers said their relationship with Peru is more about trade and not development as such." }]

Language Detection

/v2/langdetect

This service returns an ISO 639-1 natural language code for the input text.

POST  /v2/langdetect

Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "iso6391": "en" }

Analysis Recipes

/v2/analysis-recipes

☆☆☆PREMIUM USER☆☆☆

Using this end point you can access a library of predefined composite analysis recipes (cf. presets, patches, combos) which fuse multiple analysis requests into a single one (cf. multicall requests, joint requests, multibody responses). For example, you could execute both sentiment and emotion analysis with a single joint request instead of calling the sentiment and emotion end points individually.

Coverage

The recipes support analysis combinations that are popular amongst our users. If you require custom recipes beyond what is listed in the present API documentation, please contact us.

Analysis Groups

The analysis recipes combine individual single-call analysis end points in the form of higher-level, thematically organised analysis groups as follows:

  • affect - Subsumes /emotion and /sentiment. All analyses that involve subjective, non-factual, and affective information.
  • future - Subsumes /intent, /risk, and /speculation. All analyses that involve irrealis expressions and future-looking statements.
  • referents - Subsumes /namedentity. All analyses that involve entity mentions and references at the term/keyword/entity level.
  • topics - Subsumes /topic. All analyses that involve higher-level topics, subject headings, and themes.

The names of the analysis recipes are composed of these analysis group names.

Response Fields

The response fields are compatible with those returned by the single-call end points described elsewhere in the present API documentation. However, because the responses from joint multicall requests may include data spanning multiple structural levels in text, the following additional structural wrapper fields are used across the analyses (where relevant):

  • document - The document-level analyses executed.
  • entity - A list of entity mentions, each containing the entity-level analyses executed.
  • namedentity - A list of Named Entities, each containing the entity-level analyses executed.
  • sentence - A list of sentences, each containing the sentence-level analyses executed.
Basic, holistic affective signals at the document level.

POST  /v2/analysis-recipes/affect-1

Analyses Executed: emotion, sentiment
Structural Wrappers: document
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "emotion": [], "sentiment": {} } }
Basic, holistic affective signals at the document, entity, and sentence levels.

POST  /v2/analysis-recipes/affect-2

Analyses Executed: emotion, sentiment
Structural Wrappers: document, entity, sentence
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "emotion": [], "sentiment": {} }, "entity": [{ "sentiment": {} }], "sentence": [ { "emotion": [], "sentiment": {} } ] }
Basic, holistic affective signals alongside entity profiles in documents.

POST  /v2/analysis-recipes/affect-referents-1

Analyses Executed: namedentity, sentiment
Structural Wrappers: document, namedentity
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "sentiment": {} }, "namedentity": [] }
Basic, holistic affective signals alongside entity and topic profiles in documents.

POST  /v2/analysis-recipes/affect-referents-topics-1

Analyses Executed: emotion, namedentity, sentiment, topic
Structural Wrappers: document, namedentity
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "emotion": [], "sentiment": {}, "topic": [] }, "namedentity": [] }
Basic, holistic affective and topic signals at the document level.

POST  /v2/analysis-recipes/affect-topics-1

Analyses Executed: sentiment, topic
Structural Wrappers: document
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "sentiment": {}, "topic": [] } }
Basic, holistic affective and topic signals at the document level, including emotion dimensions.

POST  /v2/analysis-recipes/affect-topics-2

Analyses Executed: emotion, sentiment, topic
Structural Wrappers: document
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "emotion": [], "sentiment": {}, "topic": [] } }
Basic forward-looking signals at the sentence level.

POST  /v2/analysis-recipes/future-1

Analyses Executed: intent, risk, speculation
Structural Wrappers: sentence
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "sentence": [ { "intentType": "INTENT.DECISION", "riskType": "RISK", "speculationType": "OTHER" } ] }
Basic non-affective entity and topic profiles in documents.

POST  /v2/analysis-recipes/referents-topics-1

Analyses Executed: namedentity, topic
Structural Wrappers: document, namedentity
Attributes
  • text (string, required) - The text that you want to analyse. Example: ...your...text...
Response 200
{ "document": { "topic": [] }, "namedentity": [] }

Resources: Sentiment

/v2/resources/lexicons/sentiment/{lexicon}

This end point allows you to manage the underlying lexical resources that are used for the sentiment analysis on your account. By fine-tuning and customising sentiment lexica (adjectives, adverbs, nouns, verbs), you can adapt the sentiment analysis to a particular genre, domain, topic, or use case beyond the default, generic, general-purpose resources.

POST  /v2/resources/lexicons/sentiment/{lexicon}

Parameters
  • lexicon (string, required) - Possible values: adjectives, adverbs, nouns, verbs.
Attributes
  • text (string, required) - The entry to be stored in the lexicon. Example: quasicool
  • polarity (string, required) - Sentiment polarity p of the lexicon entry where p ∈ { pos | ntr | neg }. Example: pos
  • reverse (string, required) - Sentiment reversal r of the lexicon entry where r ∈ { rev | equ }. Example: equ
Response 201
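A minimal sketch that adds a custom adjective to the sentiment lexicon and then lists the stored entries (placeholder host and credentials as above; the entry values are illustrative):

    import requests

    API_ROOT = "https://api.example.com"        # placeholder
    AUTH = ("yourUserName", "yourPassword")

    # Add a positive, non-reversing adjective to the custom sentiment lexicon.
    entry = {"text": "quasicool", "polarity": "pos", "reverse": "equ"}
    created = requests.post(f"{API_ROOT}/v2/resources/lexicons/sentiment/adjectives",
                            json=entry, auth=AUTH)
    print(created.status_code)                  # 201 on success

    # List the adjectives currently stored on the account.
    adjectives = requests.get(f"{API_ROOT}/v2/resources/lexicons/sentiment/adjectives",
                              auth=AUTH).json()
    print(adjectives)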

GET  /v2/resources/lexicons/sentiment/{lexicon}

Parameters
  • lexicon (string, required) - Possible values: adjectives, adverbs, nouns, verbs.
Response 200
       [{
        "text": "quasi-intelligent",
        "polarity": "pos",
        "id": "51b0630a7a233d39005ecc1e"
       }, {
        "text": "unemployment",
        "polarity": "ntr",
        "id": "51b0630a7a233d39005ecc1e"
       }]

GET  /v2/resources/lexicons/sentiment/{lexicon}/{objectID}

Parameters
  • lexicon (string, required) - Possible values: adjectives, adverbs, nouns, verbs.
  • objectID (string, required) - Unique ID of the resource.
Response 200
       {
        "text": "quasi-intelligent",
        "polarity": "pos",
        "id": "51b0630a7a233d39005ecc1e"
       }

DELETE  /v2/resources/lexicons/sentiment/{lexicon}/{objectID}

Parameters
  • lexicon (string, required) - Possible values: adjectives, adverbs, nouns, verbs.
  • objectID (string, required) - Unique ID of the resource.
Response 200

Resources: Topics

/v2/resources/topics/keywords

This end point allows you to expand the underlying resources that are used for the topic classification services on your account. To go beyond the default, generic, general-purpose topic classifiers (see the /topic section), you can upload simple weighted expressions for specific words and phrases to guarantee crisp, unconditional topic tags.

POST  /v2/resources/topics/keywords

Attributes
  • text (string, required) - The expression (word or phrase) to match. Example: roasted peanuts
  • classLabel (string, required) - The label (tag) to apply to the matched text. Example: YUMMY
  • weight (number, optional) - The weight of the matched text. Positive values boost and negative values suppress the importance of the class label in topic tagging. Example: 65
Response 201

GET  /v2/resources/topics/keywords

Response 200
       [{
        "id": "57a08990954324034f92fe0e",
        "classLabel": "YUMMY",
        "text": "roasted peanuts",
        "weight": 400
       }, {
        "id": "57a089ab954324034f92fe10",
        "classLabel": "YUCKY",
        "text": "salted peanuts",
        "weight": 12
       }]

GET  /v2/resources/topics/keywords/{objectID}

Parameters
  • objectID (string, required) - Unique ID of the resource.
Response 200
       {
        "id": "57a08990954324034f92fe0e",
        "classLabel": "YUMMY",
        "text": "roasted peanuts",
        "weight": 400
       }

DELETE  /v2/resources/topics/keywords/{objectID}

Parameters
  • objectID (string, required) - Unique ID of the resource.
Response 200

Resources: Named Entities

/v2/resources/namedentity/assertions

This end point allows you to manage the resources that are used for the Named Entity recognition services on your account. The resources support simple assertions that match specific words and phrases to guarantee crisp, unconditional Named Entity tags. Assertion-based matching takes priority over the default, generic, general-purpose Named Entity classifiers (see the /namedentity section).

POST  /v2/resources/namedentity/assertions

Attributes
  • text (string, required) - The text (word or phrase) to match. Example: text analytics
  • classLabel (string, required) - The label (tag) to apply to all Named Entities that match the specified text. Example: TECH.COOL
Response 201

GET  /v2/resources/namedentity/assertions

Response 200
       [{
        "id": "5784e2759543246479a8633e",
        "classLabel": "TECH.COOL",
        "text": "text analytics"
       }, {
        "id": "5784e24d9543246479a8633a",
        "classLabel": "ORG.COMPANY.EXCELLENT",
        "text": "TheySay"
       }]

GET  /v2/resources/namedentity/assertions/{objectID}

Parameters
  • objectID (string, required) - Unique ID of the resource.
Response 200
       {
        "id": "5784e2759543246479a8633e",
        "classLabel": "TECH.COOL",
        "text": "text analytics"
       }

DELETE  /v2/resources/namedentity/assertions/{objectID}

Parameters
  • objectID (string, required) - Unique ID of the resource.
Response 200

Resources: Entity Taxonomies

/v2/resources/taxonomies/entity

This end point allows you to manage the taxonomic resources that are used in the entity categorisation on your account. By adding pattern matching rules for taxonomic categories, you can categorise entity mentions into any desired taxonomic levels beyond the default head noun-based grouping.

POST  /v2/resources/taxonomies/entity

Attributes
  • matchPattern (string, required) - A regex pattern for capturing entity mentions. Example: (price(s)?|bill(s)?|offer(s)?|expensive|rip(-| )?off)
  • category (string, required) - The taxonomic category under which matched entity mentions should be categorised. Example: PRICE
Response 201

GET  /v2/resources/taxonomies/entity

Response 200
       [{
        "matchPattern": "(beer|lager|bitter)",
        "category": "FOOD.DRINK",
        "id": "51b0781f7a233d48005ecc20"
       }, {
        "matchPattern": "pizza(s)?",
        "category": "FOOD.PIZZA",
        "id": "51b0780a7a233d4e005ecc1f"
       }]

GET  /v2/resources/taxonomies/entity/{objectID}

Parameters
  • objectID (string, required) - Unique ID of the resource.
Response 200
       {
        "matchPattern": "(beer|lager|bitter)",
        "category": "FOOD.DRINK",
        "id": "51b0781f7a233d48005ecc20"
       }

DELETE  /v2/resources/taxonomies/entity/{objectID}

Parameters
  • objectID (string, required) - Unique ID of the resource.
Response 200

Account Usage

/v2/usagestats

You can monitor your API usage within a specific time period between two timestamps. The from and to parameters expect values that are compliant with the [W3C](http://www.w3.org/TR/NOTE-datetime) date and time format.

GET  /v2/usagestats

Parameters
  • from (string, required) - The W3C start value for the query. Example: 2013-02-01
  • to (string, optional) - The W3C end value for the query. If omitted, defaults to now. Example: 2013-02-13
  • groupby (string, optional) - Specify the fields to group the results by. Possible values: date, method, path, status, ip. These can be combined as a comma-separated list, e.g. groupby=date,ip. The all value can be used as a shorthand. Defaults to date. Example: date
  • aggregate (string, optional) - Specify the aggregates. Possible values: count, length, duration. These can be combined as a comma-separated list, e.g. aggregate=length,duration. The all value can be used as a shorthand. Defaults to count. Example: duration
Response 200
{ "username": "yourUserName", "from": "2013-02-06T00:00:00.000Z", "to": "2013-02-13T00:00:00.000Z", "requestCount": 193, "dailyUsage": [{ "date": "2013-02-06T00:00:00.000Z", "requestCount": 0 }, { "date": "2013-02-07T00:00:00.000Z", "requestCount": 3 }, { "date": "2013-02-08T00:00:00.000Z", "requestCount": 97 }, { "date": "2013-02-09T00:00:00.000Z", "requestCount": 0 }, { "date": "2013-02-10T00:00:00.000Z", "requestCount": 0 }, { "date": "2013-02-11T00:00:00.000Z", "requestCount": 15 }, { "date": "2013-02-12T00:00:00.000Z", "requestCount": 72 }, { "date": "2013-02-13T00:00:00.000Z", "requestCount": 6 } ] }