Converts a set of analysed intensity images into a cube holding Stokes vectors.
Either dual-beam or single-beam data can be processed, with an appropriate algorithm being used in each case. There is also an option to process dual-beam data using the single-beam algorithm (see parameter DUALBEAM).
All the input images should be aligned pixel-for-pixel, and should have had the sky background removed. O and E ray images in dual-beam data should have been extracted into separate images. If the input arrays are 3D, the first two pixel axes must be the spatial axes, and all planes within each input cube must be spatially aligned. Corresponding 2D planes within 3D input cubes are processed independently of each other, using the same algorithm used for 2D input images.
The final axis in the output array corresponds to Stokes parameter and has bounds 1:3 (this will be axis 3 if the inputs are 2D or axis 4 if the inputs are 3D). Axis values 1, 2 and 3 correspond to I, Q and U respectively. Currently, circular polarimetry can only be performed by the dual-beam algorithm, in which case the final axis of the output has bounds 1:2, corresponding to I and V values (see parameter PMODE).
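For readers scripting against the output cube, the plane ordering can be illustrated as follows. This is a minimal pure-Python sketch, not POLCAL code; the cube values, and the derived degree and angle of linear polarization, are illustrative only:

```python
import math

# Hypothetical output cube for 2-D inputs, indexed here as
# [stokes, row, column]: axis 0 is the Stokes axis, whose planes
# correspond to I, Q and U (NDF axis values 1, 2 and 3).
cube = [
    [[100.0, 80.0], [90.0, 110.0]],   # I plane
    [[10.0, -8.0], [0.0, 11.0]],      # Q plane
    [[0.0, 6.0], [9.0, -11.0]],       # U plane
]

I, Q, U = cube[0][0][0], cube[1][0][0], cube[2][0][0]

# Degree and position angle of linear polarization at pixel (0, 0).
p = math.sqrt(Q**2 + U**2) / I                # fractional polarization
theta = 0.5 * math.degrees(math.atan2(U, Q))  # position angle, degrees

print(round(p, 3), round(theta, 1))  # 0.1 0.0
```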
"
E"
or "
O"
). Otherwise, the single-beam algorithm is used. It is possible to
process dual-beam data using the single-beam algorithm, so DUALBEAM may be set to
FALSE even if the input images hold dual-beam data. However, the opposite is not true; the
application will abort if DUALBEAM is set TRUE and any of the input images contain
single-beam data. [!] "
Group Expressions"
. Read access only is required unless the SETVAR
parameter is assigned a TRUE value. In dual-beam mode, it specifies the maximum number of iterations to be used when inter-comparing pairs of input images to determine their relative scale-factor and/or zero-point. If the specified number of iterations is exceeded without achieving the accuracy required by the settings of the TOLS and TOLZ parameters, then a warning message will be issued, but the results will still be used. The value given for MAXIT must be at least one. The runtime default value is 30.
In single-beam mode, it specifies the maximum number of iterations to be used when estimating input variances or rejecting aberrant input values. The default value depends on the value supplied for parameter WEIGHTS. If WEIGHTS indicates that estimates of the input variances are to be made (i.e. if WEIGHTS has the value 2 or 3), then the default value is 8. Otherwise, if variances in the input NDFs are to be used (i.e. if WEIGHTS is 1), the default is zero. This is because each iteration is a computationally expensive process, and so iterations should only be performed if they are really necessary. MAXIT is always fixed at zero if WEIGHTS is 4 (i.e. if all input data are given constant weight). See also parameter TOLR. []
"
WPLATE"
are 0.0 and 45.0. Currently, the single-beam algorithm can only
perform linear polarimetry. [LINEAR] "
sky noise
suppression factor"
used to control the effects of sky noise when pairs of input images are
inter-compared to determine their relative scale-factor. It is intended to prevent the resulting
scale-factor estimate being biased by the many similar values present in the "
sky background"
of
typical astronomical data. SKYSUP controls an algorithm which reduces the weight given to
data where there is a high density of points with the same value, in order to suppress this
effect.
A SKYSUP value of unity can often be effective, but a value set by the approximate ratio of sky pixels to useful object pixels (i.e. those containing non-sky signal) in a "typical" image will usually be better. The precise value is not critical. A value of zero disables the sky noise suppression algorithm completely. The default value for SKYSUP is 10. This is normally reasonable for CCD frames of extended objects such as galaxies, but a larger value, say 100, may give slightly better results for star fields. [10]
The error of a given input intensity value can be estimated in two ways, by its deviation from the sine curve connecting analysed intensity and analyser position, or from its deviation from its local neighbours. The second method requires a spatial smoothing to be performed, the size of which is specified by SMBOX. However, spatial smoothing can introduce problems because it can cause spatial structure in the image to be interpreted as noise, resulting in over-estimates of the input variances. For this reason it is best to use a small smoothing size. If you have data for many analyser positions (say 8 or more) you could even set SMBOX to zero in order to prevent any spatial smoothing being performed. In this case, the errors are based purely on deviations from the expected sine curves. If you do not have this many analyser positions, you should use some spatial smoothing. For instance if you only had data for three analyser positions (the minimum possible number), then the sine curves would fit the supplied data exactly, no matter what the noise may be, and would consequently give no information about the input variances. In this case, a larger value of SMBOX (say 9) may be necessary. [3]
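The sine curve relating analysed intensity to analyser position can be sketched as follows. The model form shown (with transmission t and efficiency eps, cf. the T and EPS extension items described later) is the standard single-beam relation and is an assumption here; alpha is the effective analyser angle:

```python
import math

def analysed_intensity(I, Q, U, alpha_deg, t=1.0, eps=1.0):
    """Expected analysed intensity at effective analyser angle
    alpha_deg (degrees), for an analyser with transmission t and
    efficiency eps. This standard form is assumed, not taken from
    POLCAL's source."""
    a = math.radians(alpha_deg)
    return 0.5 * t * (I + eps * (Q * math.cos(2 * a) + U * math.sin(2 * a)))

# With only three analyser positions, three equations determine I, Q
# and U exactly, so residuals from the fitted curve carry no noise
# information -- hence the need for spatial smoothing (SMBOX).
vals = [analysed_intensity(100.0, 10.0, 5.0, a) for a in (0.0, 45.0, 90.0)]
print([round(v, 1) for v in vals])  # [55.0, 52.5, 45.0]
```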
1 – Use the variances supplied with the input images. The reciprocals of these variances are used as weights. If any input images do not have associated variances, then a constant weight of 1.0 will be used for all input images.
2 – Use the variances supplied with the input images. If any input images do not have associated variances, then estimates of the variances are made for all input images based on the spread of data values. The reciprocals of these variances are used as weights. An error will be reported if parameter MAXIT is set to zero, since that would prevent the iterative estimation of input variances.
3 – Use estimates of the variances for all input images based on the spread of data values, ignoring any variances supplied with the input images. The reciprocals of these variances are used as weights. An error will be reported if parameter MAXIT is set to zero, since that would prevent the iterative estimation of input variances.
4 – Use a constant weight of 1.0 for all input images. Any variances supplied with the input images are ignored. The iterative rejection of bad input values controlled by parameters MAXIT and TOLR is not available in this mode.
The dual-beam algorithm always uses scheme 1. [1]
"
∗_O,∗_E"
stokes "
_O"
or "
_E"
, and stores the corresponding I, Q and U values in the 3d cube
"
stokes"
. These images contain dual-beam data (indicated by the presence of the RAY
item in the POLPACK extension), and so the dual-beam algorithm is used. "
∗_O,∗_E"
stokes nodualbeam An item named STOKES is added to the POLPACK extension. It is a character string identifying the
quantity stored in each plane of the cube. For linear polarimetry, it is set to "
IQU"
, and for circular
polarimetry it is set to "
IV"
.
The reference direction for the Stokes vectors and polarization vectors in the output NDF will be north if the first input NDF has a celestial co-ordinate Frame within its WCS information. Otherwise, the reference direction will be the second pixel axis. The POLANAL Frame in the WCS component of the output NDF is updated to describe the new reference direction. Angles are always measured positive in the same sense as rotation from the first image axis (X) to the second image axis (Y) (this will be equivalent to rotation from north through east if the image has conventional WCS information).
WCS and AXIS components are propagated from the first supplied input image to the output cube.
Single-beam mode can take account of imperfections in the analyser. The transmission (i.e. the overall throughput) and efficiency (i.e. the ability to reject light polarized across the axis) of the analyser are read from the POLPACK extension. If not found, values of 1.0 are used for both. These values are appropriate for a perfect analyser. A perfectly bad analyser (a piece of high quality glass for instance) would have a transmission of 2.0 and an efficiency of zero. The extension items named T and EPS hold the transmission and efficiency.
Single-beam mode can handle data taken by polarimeters containing a number of fixed analysers, or a single rotating analyser, in addition to the normal combination of fixed analyser and rotating half-wave plate. The POLPACK extension in each input NDF should contain either a WPLATE value (giving the angle between a fixed analyser and the half-wave plate), or an ANLANG value (giving the angle between the rotating or fixed analyser and the polarimeter reference direction). Only one of these two extension items should be present. The WPLATE and ANLANG items are free to take any value (i.e. they are not restricted to the values 0.0, 22.5, 45.0 and 67.5 degrees as in the dual-beam algorithm).
If the input intensity NDFs do not contain usable variances, then estimates of these variances can be made from the spread of supplied data values. This is an expensive iterative process (see parameters TOLR and WEIGHTS). Initially, Stokes vectors are estimated by assigning a uniform constant weight to all input data values. The I, Q and U arrays are then smoothed spatially by fitting a quadratic surface to the data within a box centred on each pixel (the size of the box can be specified using parameter SMBOX; no smoothing is applied along any frequency axis). For each input NDF, a similar array is then created holding the squared residual between the input intensity values and the intensity values implied by the smoothed Stokes vectors (using the known analyser positions, etc., for the NDFs). These arrays of squared residuals are then stacked together to obtain a single array holding the mean squared residual at each pixel. This array is then smoothed spatially using a mean box filter of a size specified by parameter SMBOX. The values in the resulting array are used as the variance estimates for the input data (note that the same variance estimates are used for all the input NDFs).
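A single-pixel sketch of this residual-based variance estimate (pure Python, with the spatial smoothing omitted; the model form and all numbers are illustrative, not taken from POLCAL):

```python
import math

def model(I, Q, U, alpha_deg):
    """Assumed analysed-intensity model for effective analyser
    angle alpha_deg (degrees), perfect analyser."""
    a = math.radians(alpha_deg)
    return 0.5 * (I + Q * math.cos(2 * a) + U * math.sin(2 * a))

angles = [0.0, 22.5, 45.0, 67.5, 90.0]         # analyser positions
data = [55.2, 54.1, 52.3, 49.9, 45.3]          # observed intensities
I, Q, U = 100.0, 10.0, 5.0                     # current smoothed estimate

# Squared residuals between the data and the intensities implied by
# the current Stokes vector; their mean serves as the common variance
# estimate for this pixel.
residuals = [d - model(I, Q, U, a) for d, a in zip(data, angles)]
variance = sum(r * r for r in residuals) / len(residuals)
```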
If the input NDFs have differing zero points, the residuals for any one NDF will usually not be centred on zero, but will have some non-zero mean determined by the zero point for the NDF. If uncorrected, this would lead to an artificially high estimate of the input variance. There is an option (see parameter DEZERO) to reduce this effect by using the mean of the residuals as a zero-point correction for the input array. Note, however, that selecting this option will usually result in significantly longer run times, and may require larger values for parameter MAXIT.
In order to provide some safe-guards against aberrant values in the input NDFs, input data values which are badly inconsistent with the smoothed Stokes vectors are rejected at this point. Pixels are rejected if the corresponding squared residual is greater than NSIGMA times the standard deviation expected for the pixel value. This standard deviation may be derived from the variances in the input data file, or from the variances estimated from the spread of data values. If many input values are rejected at a particular pixel it may indicate an entirely corrupt pixel, in which case you may want to ignore the pixel altogether by setting it bad in the output cube. This can be done using parameter MINFRAC.
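The rejection test, taken literally from the criterion above, can be sketched as follows (all numbers are illustrative):

```python
# An input value is rejected when its squared residual exceeds
# NSIGMA times the expected standard deviation for that pixel.
# NSIGMA and the variance are illustrative values.
NSIGMA = 3.0
variance = 4.0                       # variance estimate for this pixel
residuals = [1.0, -2.0, 14.0, 0.3]   # residuals from the sine-curve fit

sigma = variance ** 0.5
kept = [r for r in residuals if r * r <= NSIGMA * sigma]
print(kept)  # [1.0, -2.0, 0.3]
```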
After rejecting such values, the algorithm goes on to re-calculate the Stokes vectors excluding the rejected input values, weighting the input data values as specified by parameter WEIGHTS. This process is repeated a number of times given by parameters TOLR and MAXIT.
There are a couple of restrictions in single-beam mode. Firstly, there is currently no facility for measuring circular polarization. Secondly, no attempt is made to correct for differences in the exposure times of the input NDFs, which should all have been normalised to the same exposure time.
Only polarimeters with a fixed analyser and rotating half-wave plate are supported, and the half-wave plate angle (given by item WPLATE in the POLPACK extension of each input image) must be one of 0, 22.5, 45 and 67.5 degrees. Other values cause the application to abort.
If input images for all four half-wave plate positions are provided (0, 22.5, 45 and 67.5 degrees), then a correction is made for any difference in sensitivity of the two channels (such as can be caused by the use of a polarized flat-field, for instance). This correction is known as the "F-factor" and is based on the redundancy provided by using four half-wave plate positions. If images with the required half-wave plate positions are not provided, then it is assumed that the two channels are equally sensitive (that is, an F-factor of 1.0 is used).
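The redundancy that makes such a channel-sensitivity factor measurable can be illustrated with the standard dual-beam ratio construction. This is a sketch under assumed values (I = 100, Q = 10, channel imbalance 1.2); the exact estimator used by the application may differ:

```python
import math

# O and E ray intensities at half-wave plate angles 0 and 45 degrees
# for one pixel, with a sensitivity imbalance F = 1.2 applied to the
# E channel (illustrative values, perfect optics assumed).
F = 1.2
O_0,  E_0  = 0.5 * (100 + 10), F * 0.5 * (100 - 10)
O_45, E_45 = 0.5 * (100 - 10), F * 0.5 * (100 + 10)

# The ratio method cancels F: R^2 = (O_0/E_0) / (O_45/E_45).
R = math.sqrt((O_0 / E_0) / (O_45 / E_45))
q = (R - 1) / (R + 1)            # Q/I, free of the channel imbalance
print(round(q, 3))  # 0.1

# The imbalance itself can be recovered from the complementary ratio.
F_est = math.sqrt((E_0 / O_0) * (E_45 / O_45))
print(round(F_est, 3))  # 1.2
```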
Corrections are also made for any difference in exposure times for the supplied intensity images. These are based on the fact that the sum of the O and E ray intensities should always equal the total intensity (after any F-factor correction), and should therefore be the same for all pairs of corresponding O and E ray images if they have equal exposure times.
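This normalisation can be sketched as follows (illustrative numbers; here the second O/E pair has twice the exposure of the first):

```python
# Two O/E pairs of the same field. After any channel-sensitivity
# correction, O + E should equal the total intensity, so pairs are
# scaled to a common total to remove exposure-time differences.
pairs = [(55.0, 45.0), (110.0, 90.0)]

totals = [o + e for o, e in pairs]
ref = totals[0]                     # normalise to the first pair
corrected = [(o * ref / t, e * ref / t)
             for (o, e), t in zip(pairs, totals)]
print(corrected)  # [(55.0, 45.0), (55.0, 45.0)]
```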
The E and F factors are calculated by inter-comparing pairs of intensity images to estimate their relative scale factor and zero point. This estimation is an iterative process, and is controlled by parameters TOLS, TOLZ, SKYSUP and MAXIT.