In extraction projects, annotating documents means marking portions of a document's text as expected extractions, unlike in categorization projects, where you annotate entire documents with the expected categories.
During experiments, annotations in training library documents allow the model to "learn" how to predict similar extractions, while annotations in test library documents are used to compute the metrics that determine model quality.
Create annotations for all the information classes before running experiments, following the principle: "no annotations, no extractions".
Annotations for a given library of documents are managed in the Documents tab of the project dashboard.
You can manage annotations:
- In the detail view and its variants.
- In the context view.
- With active learning.
Annotation features are disabled for documents written in a language other than the project language.