Manage project quality

Steps

1

Annotate target categories

Double-click a test file to open it in the editor and select Analyze Document or press F5 to analyze it.

Right-click the text and select Annotate Category, then choose one of the suggested categories for the document from the drop-down list.

2

Annotate target extractions

Double-click a test file to open it in the editor and select Analyze Document or press F5 to analyze it. Highlight a word or an expression directly in the input text, right-click it and choose Annotate Extraction.

From the drop-down list, select the proper extraction object (TEMPLATE.FIELD) for the word you chose.

3

Analyze a document to monitor your model quality

Move to the top-right of the screen and select Analyze Document or press F5.

Check the overall accuracy of the annotations against the text by looking at the Categorization and Extraction tool windows at the bottom; the quality of each result appears in the Quality column.

You can review annotated categories and extractions in the Categorization and Extraction tool windows at any time; a colored hexagon appears to the right of each annotated result in the Quality column.

In the Annotation tool window on the right of the screen you will see the document's precision and recall values, computed against the targets you created.
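The precision and recall shown here compare the analysis results with your annotated targets. As a rough illustration of the standard definitions (not the product's internal computation; the annotation tuples below are hypothetical), document-level precision and recall can be sketched like this:

```python
def precision_recall(targets, predictions):
    """Standard precision/recall for one document.

    targets: set of expected annotations, e.g. ("TEMPLATE.FIELD", "text span")
    predictions: set of annotations produced by the analysis
    """
    true_positives = len(targets & predictions)
    precision = true_positives / len(predictions) if predictions else 0.0
    recall = true_positives / len(targets) if targets else 0.0
    return precision, recall

# Hypothetical example: one correct extraction, one wrong, one missed field value
targets = {("PERSON.NAME", "John Smith"), ("PLACE.CITY", "Boston")}
predictions = {("PERSON.NAME", "John Smith"), ("PLACE.CITY", "Paris")}
print(precision_recall(targets, predictions))  # (0.5, 0.5)
```

Precision measures how many of the produced annotations match a target; recall measures how many targets were found.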

4

Monitor your NLP model accuracy on all documents

Move to the top-right of the screen and select Analyze All Documents, then choose a name for your report.

Open the Report tool window and double-click the name of the analysis report. A table shows all the analyzed documents along with information about categories and extractions. Move to the right side of the table to find the Categorization and Extraction sections, which include precision, recall and F-measure values for each document.