A processing workflow is a data processing pipeline. How does the data reach that pipeline?
There are several approaches to provide data to the data processing pipeline:
- A local file containing the inputs as references to data or values (one per line)
- A list of comma-separated values containing the inputs as references to data or values
- A reference to a catalogue
Local file
This approach is very useful during the early stages of application development, when you download a few files to your sandbox and use these data to run the first tests.
We do not recommend using this method beyond the early stages of application development.
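As a minimal sketch of the local-file approach (the file name and helper are hypothetical, not part of any framework API), the pipeline reads one input reference per line, skipping blank lines:

```python
# Hypothetical helper: read input references (URLs or values) from a
# local file, one per line, ignoring blank lines and surrounding spaces.
def read_inputs(path):
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]
```

For example, a file containing two product URLs on separate lines yields a two-element list of references to process.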
Comma-separated values list
This approach is a straightforward way to test the application against a small number of values. It also works well during the exploitation phase, when the application is exposed as an OGC WPS service: the comma-separated values list is well supported and allows clients to pass several values to the application.
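A minimal sketch of how a comma-separated input (as a WPS client might send it) can be split into individual values; the helper name is an assumption for illustration:

```python
# Hypothetical helper: split a comma-separated parameter string into
# individual values, trimming whitespace and dropping empty items.
def split_values(csv_string):
    return [v.strip() for v in csv_string.split(",") if v.strip()]
```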
Reference to a catalogue
This approach allows tapping into large repositories of Earth Observation data that expose an OpenSearch catalogue (see Catalog). It is the preferred option for processing large datasets and for exposing a queryable catalogue in the application's Web Processing Service interface.
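To illustrate the catalogue approach, here is a sketch (not the actual catalogue API; the endpoint and parameter names are assumptions) of building an OpenSearch query URL and extracting product links from an Atom response:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def build_query(endpoint, **params):
    # Append OpenSearch parameters (e.g. time range, bounding box)
    # to a hypothetical catalogue search endpoint.
    return endpoint + "?" + urlencode(params)

def entry_links(atom_xml):
    # Extract the "enclosure" link (the data reference) from each
    # entry of an Atom search response.
    root = ET.fromstring(atom_xml)
    return [link.get("href")
            for entry in root.findall(ATOM + "entry")
            for link in entry.findall(ATOM + "link")
            if link.get("rel") == "enclosure"]
```

Each extracted enclosure link is a reference that the processing pipeline can consume, just like a line in the local-file approach.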
For details on how to implement this, read the Application descriptor reference.