Removes regions of bad values from an NDF
It creates a smooth replacement function for the regions of bad values by forming successive approximations to a solution of Laplace’s equation, with the surrounding valid data providing the boundary conditions.
BLOCK
The maximum size in pixels of each dimension of the blocks into which the array is divided when MEMORY is TRUE. This must be at least 256. [512]
MEMORY
If this is FALSE, the whole array is processed at the same time. If it is TRUE, the array is divided into chunks whose maximum dimension along an axis is given by Parameter BLOCK. [FALSE]
NITER
The number of iterations of the relaxation algorithm. [2]
SIZE
The initial scale lengths in pixels, one per pixel axis. A value of 0 means no fitting across a dimension. For instance, [0,0,5] would be appropriate if the spectra along the third dimension of a cube are independent, and the replacement values are to be derived only within each spectrum. For maximum efficiency, a scale length should normally have a value about half the ‘size’ of the largest invalid region to be replaced. (See “Notes” section for more details.) [5.0]
TITLE
Title for the output NDF. A null (!) value means using the title of the input NDF. [!]
VARIANCE
If TRUE, variance information is to be propagated; any bad values therein are filled. The variance is also used to weight the calculation of the replacement data values. If VARIANCE is FALSE, there will be no variance processing, thus requiring two fewer arrays in memory. This parameter is only accessed if the input NDF contains a VARIANCE component. [TRUE]
The output NDF has the title "Cleaned image" instead of the title of NDF aa.

The algorithm is based on the relaxation method of repeatedly replacing each bad pixel with the mean of its two nearest neighbours along each pixel axis. Such a method converges to the required solution, but information about the good regions propagates at a rate of only about one pixel per iteration into the bad regions, resulting in slow convergence if large areas are to be filled.
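The naive relaxation scheme just described can be sketched in one dimension as follows (an illustrative numpy version, not the application’s code; the function name and the seed value are invented for the example):

```python
import numpy as np

def relax_fill(data, bad, niter):
    """Naive relaxation: repeatedly replace each bad pixel with the
    mean of its two neighbours (1-D case, Jacobi-style updates)."""
    filled = data.copy()
    filled[bad] = 0.0                      # arbitrary seed for bad pixels
    for _ in range(niter):
        padded = np.pad(filled, 1, mode="edge")
        neighbour_mean = 0.5 * (padded[:-2] + padded[2:])
        filled[bad] = neighbour_mean[bad]  # good pixels stay fixed
    return filled

data = np.array([1.0, 2.0, np.nan, np.nan, np.nan, 6.0, 7.0])
bad = np.isnan(data)
```

Each iteration moves information about one pixel further into the bad region, so even this three-pixel gap needs many iterations before the filled values settle near the linear solution [3, 4, 5].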
This application speeds convergence to an acceptable function by forming the replacement mean from all the pixels along the same axis (such as a row or a column), using a weight which decreases exponentially with distance and goes to zero after the first good pixel is encountered in any direction. If there is variance information, this is included in the weighting so as to give more weight to surrounding values with lower variance. The scale length of the exponential weight is initially set large, to allow rapid propagation of an approximate ‘smooth’ solution into the bad regions; an initially acceptable solution is thus rapidly obtained (often in the first one or two iterations). The scale length is subsequently reduced by a factor of 2 whenever the maximum absolute change occurring in an iteration has decreased by a factor of 4 since the current scale length was first used. In this way, later iterations introduce progressively finer detail into the solution. Since this fine detail occurs predominantly close to the ‘crinkly’ edges of the bad regions, the slower propagation of the solution in the later iterations is then less important.
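A much-simplified 1-D sketch of the weighted replacement is below (illustrative only; here each bad pixel uses just the first good pixel found in each direction, whereas the application also weights the current estimates of the intervening bad pixels and iterates with a shrinking scale length):

```python
import numpy as np

def weighted_fill_1d(data, bad, scale):
    """Replace each bad pixel by an exponentially weighted mean of the
    nearest good pixel on each side; weights decay as exp(-d / scale)."""
    filled = data.copy()
    n = data.size
    for i in np.flatnonzero(bad):
        wsum = vsum = 0.0
        for step in (-1, 1):               # look left, then right
            j = i + step
            while 0 <= j < n:
                if not bad[j]:             # weight is zero beyond this pixel
                    w = np.exp(-abs(j - i) / scale)
                    wsum += w
                    vsum += w * filled[j]
                    break
                j += step
        if wsum > 0.0:
            filled[i] = vsum / wsum
    return filled

data = np.array([1.0, 2.0, np.nan, np.nan, np.nan, 6.0, 7.0])
bad = np.isnan(data)
filled = weighted_fill_1d(data, bad, scale=5.0)
```

Even in one pass this gives a plausible interpolation across the gap, because every bad pixel sees a good value on each side rather than waiting for information to creep in one pixel per iteration.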
When there is variance processing, the output variance is reassigned if either the input variance or data value was bad. Where the input value is good but its associated variance is bad, the calculation proceeds as if the data value were bad, except that only the variance is substituted in the output. The new variance is approximated as twice the inverse of the sum of the weights.
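The stated variance rule can be written out as a small sketch (hypothetical helper; the weights combine an exponential distance term with the inverse variance, as described above):

```python
import numpy as np

def fill_with_variance(values, variances, dists, scale):
    """Weighted replacement for one bad pixel from surrounding good
    pixels: exponential distance weights divided by the variances."""
    values = np.asarray(values)
    w = np.exp(-np.asarray(dists) / scale) / np.asarray(variances)
    new_value = np.sum(w * values) / np.sum(w)
    new_variance = 2.0 / np.sum(w)  # twice the inverse of the weight sum
    return new_value, new_variance
```

Surrounding values with lower variance get larger weights and so pull the replacement value towards themselves, as the weighting described in the Notes intends.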
The price of the above efficiency is that considerable workspace is required, typically two or three times the size of the input image, and larger still for the one- and two-byte integer types. If memory is at a premium, there is an option to process in blocks (cf. Parameter MEMORY). However, this may not give as good results as processing the array in full, especially when the bad-pixel regions span blocks.
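What block-wise processing means here can be illustrated with a small sketch (hypothetical helper, two dimensions; the real chunking is internal to the application and, as noted, bad regions spanning block boundaries are handled less well):

```python
import numpy as np

def process_in_blocks(arr, block, fill_func):
    """Apply fill_func independently to chunks whose maximum dimension
    along each axis is `block` (cf. the MEMORY and BLOCK parameters)."""
    out = np.empty_like(arr)
    ny, nx = arr.shape
    for y0 in range(0, ny, block):
        for x0 in range(0, nx, block):
            sl = (slice(y0, y0 + block), slice(x0, x0 + block))
            out[sl] = fill_func(arr[sl])
    return out
```

Because each chunk is filled with no knowledge of its neighbours, a bad region straddling a chunk boundary is interpolated from one side only in each chunk, which is why full-array processing is preferred when memory allows.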
The value of Parameter SIZE is not critical and the default value will normally prove effective. It primarily affects the efficiency of the algorithm on various size scales. If the smoothing scale is set to a large value, large-scale variations in the replacement function are rapidly found, while smaller-scale variations may require many iterations. Conversely, a small value will rapidly produce the small-scale variations but not the larger-scale ones. The aim is to select an initial value of SIZE such that, during the course of a few iterations, the full range of size scales in the replacement function is used. In practice this means that the value of SIZE should be about half the size of the largest scale variations expected. Unless the valid pixels are very sparse, this is usually determined by the ‘size’ of the largest invalid region to be replaced.
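The half-the-largest-region rule of thumb for SIZE can be expressed as a 1-D sketch (the helper name is invented for the example):

```python
import numpy as np

def suggested_size(bad):
    """Rule of thumb: about half the length of the longest run of
    contiguous bad pixels along an axis."""
    longest = run = 0
    for is_bad in bad:
        run = run + 1 if is_bad else 0
        longest = max(longest, run)
    return longest / 2.0
```

For a mask whose longest bad run is four pixels, this suggests an initial scale length of about 2, in line with the guidance above.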
An error results if the input NDF has no bad values to replace.
The progress of the iterations is reported at the normal reporting level. The format of the output is slightly different if the scale lengths vary with pixel axis; an extra axis column is included.
This routine correctly processes the AXIS, DATA, QUALITY, VARIANCE, LABEL, TITLE, UNITS, WCS, and HISTORY components of an NDF data structure and propagates all extensions.
Processing of bad pixels and automatic quality masking are supported. The output bad-pixel flag is set to indicate no bad values in the data and variance arrays.
All non-complex numeric data types can be handled. Arithmetic is performed using single- or double-precision floating point as appropriate.