Manage input mapping

Some components of a workflow expect JSON input with a specific format and do not work if they receive unrecognized input. The reference section of this manual describes the JSON keys that the components' blocks recognize in their input.

The first block of a data flow receives, by default, the entire workflow input, that is, the JSON payload of the analysis request submitted to the workflow via its API or through the interactive test interface.
Such input must contain a valid combination of keys among those that the block expects.
The presence of other keys in the input can be tolerated (excess keys are ignored) or not, depending on the type of component and the keys themselves. For example, if a block expects one of two alternative combinations of keys and both combinations are present in the input, the input is ambiguous and is not tolerated.
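
For example, suppose a block accepts either a text key or a document key (hypothetical key names, not taken from any specific component). An input like the following is ambiguous, because both alternatives are present:

    {
        "text": "The quick brown fox",
        "document": {
            "name": "sample.txt",
            "content": "The quick brown fox"
        }
    }

Removing either key, so that only one of the two alternatives remains, would make the input acceptable for such a block.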

Any block in a flow other than the first may, depending on its type, allow and require input mapping.

Input mapping consists of indicating, by editing the properties of the block, which keys present in the JSON produced by upstream blocks must be "read" by the block. The keys can be among those produced by the previous block in the flow, but not necessarily: they can also be keys from other upstream blocks, including the first block. This means that the JSON received by a block other than the first can be a collage constructed on the fly by NL Flow immediately before executing the block, drawing on different sources upstream of the block itself.

Info

Mapped keys must be of the same type: an object must be mapped to an object, an array to an array, a string to a string, etc.
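
The following sketch illustrates both the "collage" idea and the type constraint; all keys are hypothetical and do not refer to any specific component. Suppose the first block of the flow produces:

    {
        "text": "John Smith works at Acme Corp."
    }

and an intermediate block produces:

    {
        "entities": ["John Smith", "Acme Corp"],
        "language": "en"
    }

A downstream block whose input properties are mapped to text (a string from the first block) and entities (an array from the intermediate block) would receive this JSON, assembled on the fly by NL Flow:

    {
        "text": "John Smith works at Acme Corp.",
        "entities": ["John Smith", "Acme Corp"]
    }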

In some cases the NL Flow editor tries to "automagically" map input properties when the block is connected to a previous block. The editor searches for reasonable matches between the output keys of the block from which the connection originates and the input keys expected by the block that receives the connection. If matches are found, the block's input properties are set automatically.

If the "automagical" mapping does not happen, you can try the following:

  • Edit the block.
  • Go to the Input tab.
  • Select Map properties automatically on the right of the tabs' strip.

If this also fails, if automatic mapping is not available for the block, or simply as an alternative to automatic mapping, you can (or must) map manually (an illustrative sketch follows the steps below):

  • For a Map block:

    • Edit the block and define mappings.
  • For other blocks requiring mapping:

    • Edit the block.
    • Go to the Input tab.
    • For each input key to map, choose the input key from the drop-down list.
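
As a purely hypothetical sketch of manual mapping (the key names are invented and the labels shown by the editor may differ), suppose a block expects a string property named text and the upstream block produces:

    {
        "document": {
            "name": "sample.txt",
            "content": "The quick brown fox"
        }
    }

In the Input tab of the block, you would choose for the text property the upstream key that holds the string, here the content key inside document, so that at run time the block receives:

    {
        "text": "The quick brown fox"
    }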

By default, the workflow input is inaccessible to blocks downstream of the first in each data flow. However, if the input format is explicitly defined and made known to the editor, the workflow input keys can be mapped to the input properties of any block, including the first. This both allows blocks placed "deep" in the flow to access the workflow input and solves particular situations such as the following:

  • The workflow has two flows.
  • The first block of the first flow expects key A in the input and does not tolerate the presence of other keys in the JSON.
  • The first block of the second flow expects key B in the input, but tolerates the presence of other keys in the JSON.
  • The workflow input contains both A and B, but, because the first block of the first flow does not tolerate the presence of B, the workflow throws an exception when used.

By mapping a specific input property of the first block of the first flow to key A of the workflow input, the problem is solved.

Mapping to the keys of the workflow input is done as described above; the keys of the workflow input have names starting with $nlflow_input.
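
Continuing the example above, with hypothetical key names, the workflow input might look like this:

    {
        "A": "value expected by the first flow",
        "B": "value expected by the second flow"
    }

In the Input tab of the first block of the first flow, the relevant input property would be mapped to the workflow input key listed with the $nlflow_input prefix (something like $nlflow_input.A; the exact name shown by the editor may differ). The block then receives only the value of A, regardless of the other keys present in the workflow input.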