“Error” is a woolly concept; it means different things to different people, and is generally intimately tied to specialist data or applications. Even where some mathematical description is adopted, with rigid rules describing the effects of different operations on the error values, there is still no way of protecting the user against invalid processing sequences and, consequently, the generation of incorrect and misleading error estimates. However, there have been repeated and emphatic demands for there to be some provision for error handling, so that (at the very least) error bars can appear on plots.
The compromise adopted in Starlink data structures is to allow normal statistics to be assumed and to provide for variances to be stored along with the data. User-defined structures may employ different representations of error information.
On input to an application it is assumed that the elements of the [DATA_ARRAY] data object are independent and subject to normal statistics, and that the contents of the [VARIANCE] data object are the variances of the corresponding elements of [DATA_ARRAY]. For most data, however, this will not be strictly true, and the variances should therefore be taken only as a guide. Ultimately it is the responsibility of the user to ensure that the result is sensible. For example, if two copies of the same data are added together, the application cannot detect this, and the computed variances will be wrong by a factor of two.
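The factor-of-two example can be demonstrated numerically. The following Python sketch (not part of any Starlink interface; the variable names are illustrative) adds a data array to itself: propagation under the independence assumption reports a variance of 2v per element, while the true variance of x + x is Var(2x) = 4v.

```python
import numpy as np

rng = np.random.default_rng(0)
v = 1.0                                  # true per-element variance
n = 100_000
data = rng.normal(0.0, np.sqrt(v), n)    # one data array

# Adding the array to itself: the application cannot know the two
# inputs are identical, so it propagates variances as if independent.
summed = data + data
reported_var = v + v                     # independence assumption -> 2v

# The true variance of x + x is Var(2x) = 4v, so the reported value
# underestimates the truth by a factor of two.
true_var = np.var(summed)                # empirically close to 4v
```
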
The [VARIANCE] data object will be propagated in cases where the application can readily compute its processed values. For example, it is relatively easy to define the effect on the variances of simple scalar and vector arithmetic operations, and so variances will be computed and included in output structures; however, more complicated operations, such as convolution, are not so amenable, and variances will not be computed. The programmer should state, for each program, whether [VARIANCE] is processed and what the limitations of the variance computation are.
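For simple scalar and vector arithmetic the propagation rules are standard first-order formulae for independent inputs. The Python sketch below illustrates them; the helper names (add, scale) are hypothetical and not part of any Starlink interface.

```python
import numpy as np

def add(a, var_a, b, var_b):
    """c = a + b for independent inputs: variances add."""
    return a + b, var_a + var_b

def scale(a, var_a, k):
    """c = k * a for a scalar k: variance scales as k**2."""
    return k * a, k * k * var_a

# Example: propagate variances through an addition and a scaling.
a = np.array([10.0, 20.0])
va = np.array([1.0, 4.0])
b = np.array([5.0, 5.0])
vb = np.array([2.0, 2.0])

c, vc = add(a, va, b, vb)      # vc = va + vb
d, vd = scale(c, vc, 2.0)      # vd = 4 * vc
```

Note that the add rule is exactly what goes wrong in the duplicated-data example above: it is valid only when the two inputs really are independent.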