Manage project quality

Steps

1

Annotate target categories

Double-click a test file to open it in the editor, then right-click the text and choose Annotate Category.

From the drop-down list, select one of the suggested categories for the document.

2

Annotate target extractions

Double-click a test file to open it in the editor. Highlight a word or an expression directly in the input text, right-click it and choose Annotate Extraction.

From the drop-down list, select the proper extraction object (`TEMPLATE`/`FIELD`) for the word you have chosen.

3

Analyze a document to monitor your model quality

Move to the top-right of the screen and select Analyze Document (F5).

Look at the Annotation tool window on the right of the screen: it shows precision and recall values for the document, computed against the targets you have created.

Check the overall accuracy of the annotations against the text in the Categorization and Extraction tool windows at the bottom; the quality of your document appears in the last column of each tool window.

You can review annotated categories and extractions in the Categorization and Extraction tool windows at any time: a colored shape appears to the right of every annotation in the results, under the Quality column.
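Under the hood, per-document precision and recall compare the targets you annotated with what the model actually found. The sketch below shows that set-based comparison under illustrative assumptions (the function name, the `(text, object)` pair representation, and the sample values are hypothetical, not the tool's real API):

```python
# Hypothetical sketch: computing precision and recall by comparing
# annotated targets with the extractions a model produced.
# Annotations are modeled here as (text, extraction object) pairs.

def precision_recall(targets, predictions):
    """Return (precision, recall) for two collections of annotations."""
    targets, predictions = set(targets), set(predictions)
    true_positives = len(targets & predictions)
    precision = true_positives / len(predictions) if predictions else 0.0
    recall = true_positives / len(targets) if targets else 0.0
    return precision, recall

# Example: three annotated targets; the model found two of them
# plus one spurious extraction, so precision = recall = 2/3.
targets = {("John Smith", "FIELD"), ("Acme Corp", "FIELD"), ("2021-05-01", "FIELD")}
found = {("John Smith", "FIELD"), ("Acme Corp", "FIELD"), ("Main St", "FIELD")}
p, r = precision_recall(targets, found)
print(round(p, 2), round(r, 2))  # → 0.67 0.67
```

High precision with low recall means the model's extractions are correct but it misses targets; the reverse means it finds most targets but also produces noise.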

4

Analyze the whole training set to measure your NLP model accuracy on all documents

Move to the top-right of the screen and select Analyze All Documents, then choose a name for your report.

Open the Report tool window and click the name of the analysis report. At the bottom of the tool window, a table lists all the analyzed documents together with information about their categories and extractions. Scroll to the right side of the table to find the Categorization and Extraction sections, which include precision, recall, accuracy and F-measure values for each document.
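The F-measure in the report table is the standard harmonic mean of precision and recall, so a document scores well only when both values are high. A minimal sketch of that formula (illustrative code, not the tool's implementation):

```python
# Standard F-measure: harmonic mean of precision and recall.
# A single low value drags the score down, unlike a simple average.

def f_measure(precision, recall):
    """Return the F1 score for the given precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A document with 0.8 precision but only 0.5 recall scores well
# below the arithmetic mean of 0.65.
print(round(f_measure(0.8, 0.5), 4))  # → 0.6154
```

Comparing F-measure across the documents in the report is a quick way to spot which test files need more (or corrected) annotations.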