
Contexts

Contexts for document analysis

Document analysis resources operate within a context.
The context determines the type of Knowledge Graph to use. A context can have multiple Knowledge Graphs of the same type, one for each supported language.

standard context

To date, the API has only one context (named standard) using universal, all-purpose Knowledge Graphs. Other contexts may be added in the future, equipped with domain-specific Knowledge Graphs.

This is the overview of the document analysis capabilities and languages available for the standard context:

Capability                  English  Spanish  French  German  Italian
Deep linguistic analysis    ✔        ✔        ✔       ✔       ✔
Keyphrase extraction        ✔        ✔        ✔       ✔       ✔
Named entity recognition    ✔        ✔        ✔       ✔       ✔
Relation extraction         ✔        ✔        ✔       ✔       ✔
Sentiment analysis          ✔
Full analysis               ✔        ✔*       ✔*      ✔*      ✔*

* Doesn't include sentiment analysis.

Self-documentation resource

The API provides a self-documentation resource to discover available contexts and their features. It has this path:

contexts

Therefore, the complete URL is:

https://nlapi.expert.ai/v2/contexts

It must be requested with the GET method.
It returns the list of available contexts along with the supported languages and analyses as shown in the above table.

In the reference section of this manual you will find all the information you need to get contexts information using the API's RESTful interface.

Note

Even if you consume the API through a ready-to-use client that hides low-level requests and responses, knowing the output format helps you understand and navigate the results.

Here is an example of getting contexts information:

This example code uses expertai-nlapi, the open-source Python client corresponding to the nlapi-python GitHub project.

The client gets user credentials from two environment variables:

EAI_USERNAME
EAI_PASSWORD

Set those variables with your account credentials before running the sample program below.

The program prints the list of contexts with the languages they support.

from expertai.nlapi.cloud.client import ExpertAiClient

# The client reads the credentials from the EAI_USERNAME and EAI_PASSWORD environment variables
client = ExpertAiClient()

# Request the contexts self-documentation resource
output = client.contexts()

print("Available contexts:\n")

for context in output.contexts:
    print(context.name)
    print("\tLanguages:")
    for language in context.languages:
        print("\t\t{}".format(language.code))

This example code uses @expertai/nlapi, the open-source NodeJS client corresponding to the nlapi-nodejs GitHub project.

The client gets user credentials from two environment variables:

EAI_USERNAME
EAI_PASSWORD

Set those variables with your account credentials before running the sample program below.

The program prints the list of contexts with the languages they support.

import {NLClient} from "@expertai/nlapi";

// The client reads the credentials from the EAI_USERNAME and EAI_PASSWORD environment variables
var nlClient = new NLClient();

// Request the contexts self-documentation resource
nlClient.contexts().then((result) => {
    console.log("Available contexts:");

    for (const context of result.contexts) {
        console.log(context.name);
        console.log("\tLanguages:");
        for (const language of context.languages) {
            console.log("\t\t" + language.code);
        }
    }
});

This example code uses nlapi-java-sdk, the open-source Java client corresponding to the nlapi-java GitHub project.

The client gets user credentials from two environment variables:

EAI_USERNAME
EAI_PASSWORD

Set those variables with your account credentials before running the sample program below.

The program prints the JSON response.

import ai.expert.nlapi.security.Authentication;
import ai.expert.nlapi.security.Authenticator;
import ai.expert.nlapi.security.BasicAuthenticator;
import ai.expert.nlapi.security.DefaultCredentialsProvider;
import ai.expert.nlapi.v2.API;
import ai.expert.nlapi.v2.cloud.InfoAPI;
import ai.expert.nlapi.v2.cloud.InfoAPIConfig;
import ai.expert.nlapi.v2.message.ContextsResponse;

public class Main {

    public static Authentication createAuthentication() throws Exception {
        DefaultCredentialsProvider credentialsProvider = new DefaultCredentialsProvider();
        Authenticator authenticator = new BasicAuthenticator(credentialsProvider);
        return new Authentication(authenticator);
    }

    public static void main(String[] args) {
        try {
            InfoAPI infoAPI = new InfoAPI(InfoAPIConfig.builder()
                                                       .withAuthentication(createAuthentication())
                                                       .withVersion(API.Versions.V2)
                                                       .build());

            ContextsResponse contexts = infoAPI.getContexts();
            contexts.prettyPrint();
        }
        catch(Exception ex) {
            ex.printStackTrace();
        }
    }
}

The following curl command gets the contexts self-documentation resource of the API's REST interface.
Run the command from a Linux or macOS shell after replacing token with the actual authorization token.

curl -X GET https://nlapi.expert.ai/v2/contexts \
    -H 'Authorization: Bearer token'

The server returns a JSON object.

The following curl command gets the same resource from a Windows command prompt.
Open a command prompt in the folder where you installed curl and run the command after replacing token with the actual authorization token.

curl -X GET https://nlapi.expert.ai/v2/contexts -H "Authorization: Bearer token"

The server returns a JSON object.
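
For orientation only, here is a rough sketch of the structure that the client examples above navigate, written as a Python literal. Only the contexts, name, languages and code fields are taken from those examples; the actual payload may carry additional information (for example, the analyses supported by each language) and its exact layout may differ.

# Illustrative sketch of the response shape implied by the client code above
# (output.contexts[i].name and output.contexts[i].languages[j].code).
# The real JSON returned by the service may contain more fields.
expected_shape = {
    "contexts": [
        {
            "name": "standard",
            "languages": [
                {"code": "en"},
                {"code": "es"},
                {"code": "fr"},
                {"code": "de"},
                {"code": "it"},
            ],
        }
    ]
}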

The Knowledge Graph

The expert.ai Knowledge Graph is a concept-based representation of universal or domain-specific knowledge for a given language.

Each entry in the Knowledge Graph corresponds to a concept.

There are entries for common nouns, proper nouns, verbs, adjectives and adverbs.

Each entry contains information, for example (see the sketch after this list):

  • The terms that can be used to express the concept in a text, for example hand, pass, pass on, hand off, turn over, reach.
  • The corresponding part-of-speech, for example (to) climb → verb, and other grammatical information about the terms.
  • The topics to which the concept corresponds, for example soprano → opera, singing.
  • References to external knowledge bases such as Wikidata, DBpedia, GeoNames, etc.
  • Extended properties, for example the coordinates of places.
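
To make the structure of an entry concrete, here is a minimal, hypothetical sketch of one as a Python data structure. The field names and values are illustrative only and do not reflect the actual Knowledge Graph format.

from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical, simplified model of a Knowledge Graph entry.
# Field names and values are illustrative, not the actual expert.ai format.
@dataclass
class Entry:
    terms: List[str]                  # ways to express the concept in a text
    part_of_speech: str               # e.g. "verb" for (to) climb
    topics: List[str]                 # e.g. ["opera", "singing"] for soprano
    external_refs: Dict[str, str] = field(default_factory=dict)       # Wikidata, DBpedia, GeoNames, ...
    extended_properties: Dict[str, str] = field(default_factory=dict) # e.g. coordinates of places

# The "hand over" concept from the example above
hand_over = Entry(
    terms=["hand", "pass", "pass on", "hand off", "turn over", "reach"],
    part_of_speech="verb",
    topics=[],
)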

Modeling the concepts that can be expressed in a language is not, by itself, sufficient to enable the text analysis software to interpret ambiguous terms.

For example, consider that in the expert.ai universal Knowledge Graph for the English language there are more than 20 entries for the verb (to) put.
Each entry has statistical information indicating the frequency with which the concept is used in a reference corpus compared to other concepts that can be expressed with the same word. This information is useful for disambiguation, but insufficient: using statistics alone can lead to incorrect interpretations and to a text analysis of low quality and even lower usefulness.
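
As a toy illustration of why frequency alone falls short, the sketch below always picks the most frequent sense of a word in the reference corpus, regardless of the surrounding text. The senses and figures are invented.

# Invented candidate senses of (to) put with made-up corpus frequencies.
# Always choosing the most frequent sense ignores the context entirely.
put_senses = {
    "put (place something somewhere)": 0.62,
    "put (express, phrase)": 0.21,
    "put (invest money)": 0.09,
    # ...the real Knowledge Graph has more than 20 entries for (to) put
}

def naive_disambiguate(senses):
    # Pick the sense with the highest corpus frequency
    return max(senses, key=senses.get)

# Prints the same sense for "put the keys on the table" and
# "to put it differently" alike, which is exactly the problem.
print(naive_disambiguate(put_senses))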

What really improves the results are the relationships between concepts, hence the term Knowledge Graph. A single entry can be linked to many other entries, so relationships can be numerous.
For example, a concept can be connected to other concepts in the hierarchical relationship called "IS-A". So:

sodium
IS A
alkaline metal
IS A
metal
IS A
element

Or there can be a "part-whole" relationship:

wheel
IS A PART OF
car

clutch
IS A PART OF
car

dashboard
IS A PART OF
car

Relationships are designed to be navigated in both directions, so from the concept of car it is possible to discover the parts that make it up (wheels, clutch, dashboard, etc.) by navigating the "IS A PART OF" relationship from the whole to its parts. In the same way, for the "IS-A" relationship, starting from the concept of alkaline metal it is possible to discover which elements are "types of" the parent concept (sodium, cesium, lithium, etc.).
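
Here is a minimal sketch of this bidirectional navigation, using a hypothetical in-memory graph built from the relations above; it is not the expert.ai Knowledge Graph API, only an illustration of the idea.

from collections import defaultdict

# Hypothetical toy graph: each relation is stored once and indexed both ways.
IS_A = [("sodium", "alkaline metal"), ("alkaline metal", "metal"), ("metal", "element")]
PART_OF = [("wheel", "car"), ("clutch", "car"), ("dashboard", "car")]

def index(pairs):
    forward, backward = defaultdict(set), defaultdict(set)
    for child, parent in pairs:
        forward[child].add(parent)   # e.g. wheel -> car
        backward[parent].add(child)  # e.g. car -> {wheel, clutch, dashboard}
    return forward, backward

is_a, types_of = index(IS_A)
part_of, parts_of = index(PART_OF)

print(parts_of["car"])             # {'wheel', 'clutch', 'dashboard'}
print(types_of["alkaline metal"])  # {'sodium'} (plus cesium, lithium, ... in a fuller graph)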

Relationships can be one-to-many. This is obvious for the "part-whole" relationship when read from the whole to its parts, and for the "IS-A" relationship when read from the more generic concept to the more specific ones, but it is not obvious in the opposite direction. However, it can happen, for example:

cat
IS A
feline

but also:

cat
IS A
pet

So, in a hierarchical relationship, a concept can have multiple "parents".
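
Continuing the toy indexing idea from the sketch above, multiple parents come out naturally; again, this is purely illustrative.

from collections import defaultdict

# Same indexing idea as the previous sketch, now with a concept that has two parents.
is_a, types_of = defaultdict(set), defaultdict(set)
for child, parent in [("cat", "feline"), ("cat", "pet")]:
    is_a[child].add(parent)
    types_of[parent].add(child)

print(is_a["cat"])      # {'feline', 'pet'} -> a concept with two parents
print(types_of["pet"])  # {'cat'}           -> the same relation read the other way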

The relationships between Knowledge Graph entries are the foundations of solid disambiguation.
Suppose the text contains a form of the verb (to) put. As stated, the standard English Knowledge Graph contains more than 20 different concepts that can be expressed with (to) put, but which is the right one?

Relationships can help. The text analysis software can explore the relationships of each candidate concept to find out whether it is linked to other concepts expressed in the same text. The concept with the most links to other concepts in the text is a good candidate for the "right" concept.

The disambiguation of one word helps to disambiguate the others, but the text analysis software is always free to "go back" and correct its previous disambiguation choices as it proceeds with the analysis of the other words of the text, with a chain effect on other disambiguations.
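
Here is a highly simplified sketch of that link-counting idea, with an invented candidate list and invented links; the real disambiguation logic is far more sophisticated, and the "go back and revise" behaviour described above is not modelled.

# Toy link-counting disambiguation. Candidate senses of an ambiguous word are
# scored by how many other concepts found in the same text they are linked to
# in the (invented) graph; the best-connected candidate wins.
links = {
    "put (place something somewhere)": {"table", "kitchen", "shelf"},
    "put (express, phrase)": {"word", "sentence", "politely"},
    "put (invest money)": {"money", "stock", "bank"},
}

def disambiguate(candidates, concepts_in_text):
    return max(candidates, key=lambda sense: len(links[sense] & concepts_in_text))

# Concepts already recognized elsewhere in the sentence
print(disambiguate(links.keys(), {"word", "politely"}))  # "put (express, phrase)"
print(disambiguate(links.keys(), {"kitchen", "table"}))  # "put (place something somewhere)"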

The name used by expert.ai to designate an entry in a Knowledge Graph is syncon.