SMURF contains a small number of commands specifically directed towards the reduction of ACSIS data. This section contains an introduction to each of these commands. The reduction of ACSIS data is described more fully in The Heterodyne Data Reduction Cookbook (SC/20).
The central step in the reduction of ACSIS data is the conversion of one or more time-series cubes with [spectral channel number, receptor index, time-slice index] axes, into a spatial cube with [RA, Dec, spectral channel number] axes [3].
This step is accomplished using the makecube command.
This command first defines the extent (in both pixels and world co-ordinates) of the output cube, initialises the output cube to hold zero at every pixel, and then goes through each receptor sample value in the input data. For each such sample, it first works out its position on the sky and its spectral position, using the meta-data in the input files. It then determines the pixel in the output cube that has the same spatial and spectral position, and then “pastes” the sample value into the output cube at this central pixel position. This pasting involves dividing the input sample value up between the output pixels in the neighbourhood of the central output pixel, and then incrementing each of these output pixel values by its assigned share of the input sample value. Various schemes can be used to determine the share assigned to each output pixel - the simplest and fastest is “nearest neighbour” in which the whole input sample value is assigned to the central output pixel. Whilst being fast, this scheme can produce small geometric shifts in feature positions. Other available schemes include a bi-linear scheme that spreads each input value out linearly between the four closest output pixels, Gaussian weighting with a given radius, Sinc weighting, etc.
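As an illustration, a nearest-neighbour gridding of a set of raw time-series files might be run as follows; the file names are placeholders, and the full set of available parameters is given in the makecube reference documentation:

   % makecube in='a20100313_00029_01_*.sdf' out=obs29_cube autogrid=yes spread=nearest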
There are many other aspects of the operation of makecube that can be controlled via various program parameters; some of the more important of these are described in the makecube reference documentation.
The makecube command turns time-series cubes into spatial cubes. Each plane in a time-series cube contains a set of spectra - one for each used receptor - observed simultaneously at a time given by the position of the plane on the third axis. It should be noted that the times associated with this third axis are not guaranteed to increase monotonically, although that will often be the case. For various reasons, the writing out of some time slices to the time-series cube may be delayed, resulting in backwards steps in time along the third axis. This in itself is not a problem for makecube, which makes no assumptions about the order in which time-slices are stored in the time-series cube. However, other commands may fail - particularly commands that need to locate the time-slice index associated with a given time. Such commands assume that time is a monotonic function of time-slice index, and that therefore the equivalent of a simple binary chop can be used to locate the index associated with a given time. These commands will fail with a message indicating that “no inverse transformation can be found” if the time-series cube contains any out-of-order time slices. Operations that may produce such a failure include attempting to use Kappa wcsalign to align two time-series cubes, or displaying a 2D slice of a time-series cube using Kappa display. In these cases, the time-series cube can be “rectified” using timesort. This produces a copy of the cube in which the time slices are in monotonic order.
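For example, a single time-series cube could be rectified as follows (the file names are placeholders), after which it can safely be aligned or displayed:

   % timesort in=raw_ts out=raw_ts_sorted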
In addition, timesort can also be used to assemble and re-order the time slices from multiple input cubes relating to the same observation, and then write them out again into a set of time-series cubes, each of a specified size. In this situation, if a single time slice is split across two input cubes (which can happen if spectra from some detectors are recorded in one file and those from the other detectors in another), they will be merged into a single plane in one of the output cubes.
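A hedged sketch of gathering the time slices from all sub-files of an observation and writing them back out as a set of size-limited cubes follows; the input file names are placeholders, and the SIZELIMIT and LIMITTYPE parameter names and values are assumptions that should be checked against the timesort reference documentation:

   % timesort in='a20100313_00029_01_*.sdf' out=obs29_sorted sizelimit=512 limittype=filesize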
The process of converting one or more time-series cubes into a spatial cube will usually produce some degree of smoothing in the output data due to multiple input samples contributing to each pixel in the output cube. This smoothing can sometimes be useful in terms of reducing systematic differences between the input time-series cubes, such as differing base-lines. In these cases it is informative to be able to quantify the effect of this smoothing. The unmakecube command provides this facility - it creates a set of “artificial” time-series cubes from a given spatial cube. In detail, for each sample position in a given set of input time-series cubes, a given spatial cube is interpolated to create a new spectrum, which is then stored at the corresponding position in an output time-series cube. Thus, any smoothing present in the spatial cube is transferred into the output time-series cube.
A typical scheme for using unmakecube is sketched below.
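The following is an illustrative sketch only, in which a spatial cube is made from the real data, an artificial time-series is generated from it, and the residuals are formed and re-gridded for inspection; the unmakecube parameter names IN, REF and OUT are assumptions, and the file names are placeholders:

   % makecube in=raw_ts out=cube spread=nearest
   % unmakecube in=cube ref=raw_ts out=model_ts
   % sub in1=raw_ts in2=model_ts out=resid_ts
   % makecube in=resid_ts out=resid_cube spread=nearest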
The fit1d command fits profiles along a particular axis of an NDF data file. It is a generic command that will work on hypercubes with up to 7 dimensions, but it is discussed here in terms of a typical 3-D ACSIS data file with axes of RA, Dec, and LSR Velocity, i.e. the fitting of spectra across the imaged region of the sky. The output of fit1d is a cube with the fitted profiles, plus “Component parameter files” stored as NDF extensions. Be aware that the input cube is expected to be baseline-subtracted, i.e. to have a zero base level.
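A minimal invocation might look like the following sketch, assuming the input cube has already been baseline-subtracted (for example with Kappa mfittrend) and that the representative noise level is about 0.25 K; the file names are placeholders and the RMS parameter should be checked against the fit1d reference documentation:

   % fit1d in=cube_bsub out=cube_fit rms=0.25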
What sets fit1d apart from most other fitting routines is that, by using “Component parameter files” as inputs, it gives individual control over the fitting at each position. For instance, it is possible to fit broad lines on the nucleus of a galaxy but narrow lines everywhere else in the disk. Or to fit multiple components in an outflow and single components everywhere else in the field. Still, these types of fits may require a considerable familiarity with handling, cutting, and pasting NDF files in order to “create” the desired parameter files for input.
fit1d can also fit more complicated shapes than Gaussians. In particular, Gauss-Hermite functions are a powerful extension when fitting profiles that are skewed, peaky, or only approximately Gaussian. Figure 2 shows Gauss-Hermite profiles as a function of the “skewness” coefficient h3 and the “peakiness” coefficient h4. The red box indicates the limits on acceptable values for h3 and h4 as defined in the default config file. The limits were chosen so as to exclude fits that look more like multiple components than a distorted single Gaussian, but they are, admittedly, fairly arbitrary.
Because of the ability to fit distorted shapes, Gauss-Hermites are particularly well suited to “capture” the maximum amount of emission from a cube. Figure 3 shows an example of the quality of the fits that can be obtained. For the case shown, fit1d used a 3-component gausshermite2 function (fitting h3 and h4), with the fitting range restricted to the region around the profiles and the remaining configuration parameters at their default settings. Collapsing the cube with the fitted profiles can thus result in an accurate and almost noise-free white-light or total-emission map. Residuals from the fit can of course be studied by subtracting the fitted profiles from the original cube.
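For instance, a total-emission map and a residual cube might be produced from the fitted profiles with Kappa commands along these lines (the file names are placeholders):

   % collapse in=cube_fit out=total_emission axis=3 estimator=integ
   % sub in1=cube_bsub in2=cube_fit out=resid_cube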
fit1d does a one-dimensional fit along each “profile” (spectrum), fitting the number of requested “components” concurrently. Function shapes that can be fitted are “gaussians”, “gausshermite1”, “gausshermite2”, and “voigt” functions, which are discussed in more detail below.
Gauss-Hermite profiles are most easily visualised as the combination of a Gaussian and decaying asymmetric 3rd-order and/or symmetric 4th-order polynomials. The 3rd-order polynomial causes a positive bump on one side and a negative bump on the other side of the main Gaussian, resulting in asymmetric wings and a skewed shape. By contrast, the 4th-order polynomial causes a bump in the centre and steeper slopes, i.e. a peaky shape.
The Gauss-Hermite profiles in fit1d are called gausshermite1, fitting only h3, and gausshermite2, fitting both h3 and h4 (to fit only h4, use gausshermite2 and define h3 to be 0 and fixed). The default for FUNCTION in the configuration file is gausshermite2.
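As a hedged example, a small configuration file selecting a two-component gausshermite1 fit might look like the following; the keyword setting the number of components is assumed here to be called ncomp, so check the fit1d reference documentation before use:

   % cat myfit.cfg
   function = gausshermite1
   ncomp = 2
   % fit1d in=cube_bsub out=cube_fit rms=0.25 config=^myfit.cfg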
To emphasise a number of issues:
Setting ESTIMATE_ONLY to 1 will skip the fit and produce an output file with profiles based on the initial estimates, allowing the user to inspect those. The associated Component parameter files can be modified and used as initial estimates for a subsequent fit (see next section).
POS_ONLY.
Gaussians and Gauss-Hermites can be mixed by fixing h3 and h4 of particular components to 0; use USERVAL (the “User parameter values file”) to accomplish this. It is not possible to mix in Voigt profiles.
The SORT and SORT_ESTIMATE parameters may help in minimising problems (see next point).
RANGE.
The resulting Component parameter files can then be used to generate a combined profile using fit1d with PARCOMP and MODEL_ONLY. This is preferred over simply co-adding the output files with the fitted profiles since it will put all relevant parameter files in the header of the output file.

Besides a cube with the fitted profiles, fit1d also outputs so-called “Component parameter files”. These are added as NDF extensions in the header of the output file. They can be accessed there directly but can also be copied out as independent files:
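For example, using Kappa ndfcopy (assuming the fit1d output file was called cube_fit):

   % ndfcopy in=cube_fit.more.smurf_fit1d.comp_1 out=comp1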
There is a parameter file for each component (“line”) that was fitted along the profile, up to the number of components requested by the user; these are labelled COMP_1 up to COMP_n.
COMP_0 is a special diagnostics component that lists the number of components fitted at each position and a return code from the fitting routine.
The Component parameter files have 7 images along the axis that was fitted, each representing a fitted parameter.
Thus, for Gaussian fits, OUTFILE.MORE.SMURF_FIT1D.COMP_1(,,1) is an image of the fitted amplitudes of Component 1 of each profile, while OUTFILE.MORE.SMURF_FIT1D.COMP_1(,,3) shows the fitted FWHMs across the field-of-view.
Much of the (anticipated) use of fit1d derives from the fact that Component parameter files can be used as input as well: either to provide initial estimates or fixed values to the fitting routine. The hierarchy is as follows:
Values declared in the “User parameter values file” (USERVAL) are then substituted.
The difference between values specified in the component parameter files and ones declared in the “User parameter values file” is that the former can vary across the field-of-view whereas the latter will result in the same value being used for all profiles.
In principle a Component parameter file can be created from “scratch”:
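A hedged sketch of one way to do this is to copy an existing Component parameter file and overwrite its planes with constant values using Kappa chpix; the file names and values below are placeholders, and plane 2 is assumed here to hold the line-centre position:

   % ndfcopy in=cube_fit.more.smurf_fit1d.comp_1 out=mypar
   % chpix in=mypar out=mypar1 section='(,,1)' newval=0.5
   % chpix in=mypar1 out=mypar2 section='(,,2)' newval=-10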
Obviously, this is not a very practical example given that the profile is the same across the field: setting values in the “User parameter values file” for component 1 would achieve the same result. However, with dedicated software, or a tediously long script running chpix for every position, a sophisticated model could in principle be created. For instance, if a feature shifts in velocity across the field, it could be isolated and fitted by supplying a parameter file with a shifting initial estimate for the position of the peak.
Another situation where manipulation of parameter files can be useful is when parameter files from previous fits require corrections. For instance, it may be possible to identify the troublesome locations by thresholding with Kappa thresh, replacing those positions with bad values; Kappa fillbad can then be used to interpolate over them from the surrounding values.
Figure 4 shows an example. On the left is a section of the Amplitude plane of a parameter file resulting from a 1-component fit of a Gaussian. At a few positions problems can be seen (actually due to a secondary component). These points were isolated using thresh on the Position plane of the parameters and interpolated over using fillbad. The “fixed” parameter file was then used to provide initial estimates for a subsequent fit, resulting in the fitted Amplitudes on the right.
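A sketch of such a repair, assuming the Position plane has been copied out to a file called pos and that valid line centres lie between -5 and 15 km/s (both placeholders):

   % thresh in=pos out=pos_masked thrlo=-5 thrhi=15 newlo=bad newhi=bad
   % fillbad in=pos_masked out=pos_fixed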
These fairly simple examples are presented to illustrate the use of “Component parameter files”, but more elaborate schemes are possible. For instance, a section of a parameter file that fits broad lines in the nucleus of a galaxy can be pasted into a parameter file with narrow fits in the remainder of the disk. We’ll leave further examples to the imagination of the reader.
[3] The RA and Dec axes may be replaced by axes appropriate for other celestial co-ordinate systems.