Multiple observations form the backbone of many astronomical programmes. Determining the registration (inter-dataset transformations) of the observations is a necessary step when preparing to inter-compare or combine the data. Inter-comparison is used when performing multiple waveband observations; combination when measurements beyond the capabilities of the detector are required. Helping astronomers to determine the registration of imaging data and subsequently to transform positions or resample and combine the data is the purpose of this part of CCDPACK.
A number of registration techniques are provided, which can identify the relative positioning of frames by examining image features (centroidable objects), or by making use of prior information about the observational geometry, or both.
If it is the intention to combine datasets into one (to increase signal to noise levels, or to increase the effective area or dynamic range of the detector) the registering transforms may be used to resample the datasets so they are aligned (i.e. have pixel-to-pixel correspondence). If atmospheric transparency, sky brightness or exposure times have varied between the datasets, they need to go through a process of ‘normalisation’, in which global zero points and scale factors are determined. After these stages the data may be combined to produce a mosaic. Data combination usually makes use of a robust estimator to protect against spurious values, cosmic rays etc.
Version 3 of CCDPACK, which was released in mid-2000, handles the registration process differently from previous versions — instead of using TRANSFORM structures it now normally makes use of World Coordinate System (WCS) components of the images. For much of the time, and in particular to do the things which CCDPACK used to do, it is not necessary to be aware of this, but it opens the way to some new functionality. Discussion specific to the old methods can be found in appendix §C.
Registering a set of images requires identifying a single coordinate system which can apply to all of them, so that each pixel in each of the images has a definite position in the overall picture as well as within its own grid. This single coordinate system within which all the images can be embedded may be an actual sky coordinate system with RA and Dec coordinates, or an arbitrary one which coincides with the pixel coordinates of one of the images, or some other kind. Information on how to embed the images in the same coordinate system may come from one or more of a variety of sources, for instance:
For a given set of data, some of these sources may be more accurate or reliable than others.
It is possible to attach any number of these coordinate systems to an image. Once all the images that you are interested in have the same coordinate system attached to them, then they can all be resampled so they match pixel for pixel, ready for combination or comparison. There are two ways of keeping track of which coordinate systems are attached to an image: firstly, each coordinate system has a label (sometimes called its domain) which can be used to identify it. Some of these labels have special meaning, for instance a coordinate system labelled ‘SKY’ indicates positions on the sky itself and coordinates are usually reported in RA and Dec (the other special labels are GRID, PIXEL and AXIS). Secondly, an image always has a Current coordinate system, which is the one in which positions are normally reported. These attached coordinate systems are understood by other Starlink applications so that for instance using KAPPA’s DISPLAY command:
will display myimage with axes showing coordinates from the Current system attached to the image.
CCDPACK provides four main categories of facility for adding coordinate systems to sets of images, which are described later in this section as follows:
Resampling and mosaicing registered images is discussed in sections §8.7, 8.8 and 8.9, and some examples of putting it all together are given in §8.10.
The discussion elsewhere in this document tells you all that you need to know for using coordinate systems attached to image files in order to register them within CCDPACK. In fact for simple cases in which images are to be registered using, for instance, only object matching methods, it is not necessary to understand how the coordinate systems are handled by the registration programs. However, you can get a better understanding of coordinate systems and what you can do with them (including using them for image display) in the “Using World Coordinate Systems” section of the KAPPA document, SUN/95, and more detailed description of the underlying AST system in SUN/210.
In general CCDPACK conforms to the normal rules about World Coordinate System (WCS) image components and AST objects so other applications, for instance KAPPA’s WCSTRAN and friends, can freely be mixed with CCDPACK applications. In fact some of the KAPPA applications provide similar facilities to those of the CCDPACK ones, and they can be used instead if for some reason you prefer them. There are a couple of additional conventions used within CCDPACK however:
In CCDPACK three methods for determining image feature correspondence are provided. Two of these rely on transformations between coordinate systems being ‘well modelled’ by simple offsets. These methods have the advantage of automated and semi-automated processing. The third method relies on considerable interaction but can deal with transformations of rotation, magnification and shear as well as offsets (i.e. general linear transformations).
If images are well modelled by simple offsets apart from a known transformation, for instance a rotation of the instrument between observations, then a coordinate system describing the known transformation can be added using WCSEDIT or ASTIMP, and the automated methods still used.
The routine which determines the transformations between labelled position lists is REGISTER.
Labelled position lists are those in which the same objects (image features) have the same identification number. So for instance if a star, say no. 100, is present on several datasets it should be labelled no. 100 (or any other unique value) in all the datasets in which it appears, regardless of its coordinates. In many ways obtaining labelled position lists may be considered most of the work of object-matching-based registration, the rest being essentially straightforward.
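The idea can be sketched in a few lines of Python (illustrative only, not part of CCDPACK): once lists are labelled, pairing the same objects across two lists needs nothing more than their identifiers.

```python
# Sketch: match rows of two labelled position lists purely on their
# identifier, regardless of the coordinate values in each list.

def common_objects(list_a, list_b):
    """Each list holds (identifier, x, y) rows; returns matched triples."""
    by_id = {ident: (x, y) for ident, x, y in list_b}
    return [(ident, (x, y), by_id[ident])
            for ident, x, y in list_a if ident in by_id]
```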
The main mapping used by REGISTER is the linear mapping:

   XX = A + B*X + C*Y
   YY = D + E*X + F*Y

where A-F are the coefficients which are to be determined, X and Y the current coordinates and XX and YY the new coordinates. REGISTER supports various types of linear transformation, from simple offsets up to the full six-parameter fit.
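As an illustration of what such a fit involves (this is a minimal sketch, not CCDPACK's own code), the six coefficients can be recovered from matched position pairs by linear least squares:

```python
# Sketch of fitting XX = A + B*X + C*Y, YY = D + E*X + F*Y by
# least squares from matched (X, Y) -> (XX, YY) position pairs.

def solve(m, v):
    """Solve the square system m x = v by Gauss-Jordan elimination."""
    n = len(v)
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))  # pivot
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

def fit_linear(current, new):
    """Return [A, B, C, D, E, F] fitted from matched position pairs."""
    rows, rhs = [], []
    for (x, y), (xx, yy) in zip(current, new):
        rows.append([1.0, x, y, 0.0, 0.0, 0.0]); rhs.append(xx)
        rows.append([0.0, 0.0, 0.0, 1.0, x, y]); rhs.append(yy)
    # normal equations: (R^T R) p = R^T rhs
    m = [[sum(r[i] * r[j] for r in rows) for j in range(6)] for i in range(6)]
    v = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(6)]
    return solve(m, v)
```

At least three matched pairs (not all collinear) are needed for the six-parameter fit to be determined.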
When using a linear fit you can register a whole list of datasets in one go. So for instance if you have a set of position lists in which corresponding objects have been identified, you may pick any list as the reference set (the first is chosen by default) and all the mappings between this and the other datasets will be derived. If the position lists are ‘associated’ with images then a new coordinate system, by default labelled ‘CCD_REG’, will be added to each image: the coordinates are the same as the pixel coordinates of the reference image.
A general transformation between two datasets can also be determined. These are entered using suitably parameterised algebraic expressions. A general least squares fitting algorithm is used to find a solution which gives satisfactory values for the parameters.
If the coordinates of your images are just offset from each other (related by a translation in X and Y) and they have image features in common it may be possible to register them with a minimum of effort and preparation. More precisely, the relation between coordinates does not have to be exactly an offset, but to be sufficiently ‘well modelled’ by an offset, that is with distorting terms small enough that the error in positioning is in general a larger effect.
Automated registration is performed by the applications FINDOBJ, FINDOFF and REGISTER, used in that order.
FINDOBJ locates and centroids image features, FINDOFF determines the correspondence of the image features and REGISTER produces the mappings from this information. This sequence is used in the demonstration script ccdexercise described in §3.
FINDOBJ works by looking for pixels above a threshold value; objects are then identified as groups of ‘connected’ pixels. The groups of connected pixels are then centroided to give an accurate position. FINDOBJ has a number of parameters which can be tweaked to identify a suitable group of objects from the data; however, if you can’t make it work satisfactorily you could use the interactive program IDICURS for this step instead.
FINDOFF is the crucial application in this sequence: it performs pattern-matching between all the object positions. It assesses the degree of match found between each pair of frames and assigns it a weight. The best matches are then used to identify corresponding objects on each frame. The inter-comparison process which provides the pattern-matching facilities uses two algorithms. The first matches all the point pair-offsets between any two input lists, counting all the other points which are then paired (within an error box); the match with the most positions paired is then chosen. The second uses a statistical algorithm based on histograms of the differences in the offsets (where the peak in a histogram is assumed to be the most likely difference). In each case an estimate of the positional error must be given, as it is used when deciding which positions match, or as the bin size when forming histograms.
Which algorithm you should use depends on the number of points your position lists contain and the expected number of objects in the overlaps. Obviously it is much easier to detect a match between two lists which have most of their positions in common. With small overlaps a serious concern is the likelihood of finding a ‘false’ match. False matches are more likely the larger the datasets and the smaller the overlaps.
The first algorithm (named SLOW) is the most careful and is capable of pairing positions in data with small overlaps (although a level of false detections will always be present), but the process is inherently slow, its run time growing steeply with the number of positions. The second algorithm (named FAST) is much quicker, but requires better overlap statistics. The maximum time taken to determine the correspondence of two position lists with the SLOW algorithm is shown in table 1. If you intend to process large lists then take heed.
Because the FAST process takes so little CPU time it is better to try it first (without the SLOW process as a backup, FAILSAFE=FALSE), and only use the SLOW algorithm when you have small datasets or do not have large overlaps.
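For illustration, the histogram-based FAST approach can be sketched in Python (a simplified stand-in for what FINDOFF does, not its actual implementation): bin the offsets between every cross-pair of positions, take the peak bin as the most likely (dx, dy), then pair points that agree with it to within the positional error.

```python
# Sketch of a FAST-style match: histogram all candidate offsets
# between two position lists; the histogram peak gives the most
# likely translation, which is then used to pair individual points.
from collections import Counter

def match_offsets(list1, list2, error=1.0):
    bins = Counter()
    for x1, y1 in list1:
        for x2, y2 in list2:
            # bin size equals the positional error estimate
            bins[(round((x2 - x1) / error), round((y2 - y1) / error))] += 1
    (bx, by), _ = bins.most_common(1)[0]     # peak of the histogram
    dx, dy = bx * error, by * error
    pairs = []
    for i, (x1, y1) in enumerate(list1):
        for j, (x2, y2) in enumerate(list2):
            if abs(x2 - x1 - dx) <= error and abs(y2 - y1 - dy) <= error:
                pairs.append((i, j))
    return (dx, dy), pairs
```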
Having obtained estimates of the offsets between each dataset pair and the number of positions in common, the next stage is to determine a global solution to the registration of all the datasets. A major consideration is the possible presence of false matches.
The global registration process works by forming a graph with each position list at a node and with connecting edges whose weights are the number of matched position-pairs. The edge weights may be modified by a completeness factor which attempts to assess the quality of the match (this is based on the ratio of the expected number of matches in the overlap region to the actual number; random matches shouldn’t return good statistics when compared with genuine ones). This still leaves a possibility of false matches disrupting any attempt to register the datasets, so a single ‘spanning tree’ is chosen (a graph which just visits each node the minimum number of times required to get complete connectivity, no loops allowed) which has the highest possible number of matched positions (rejecting edges with few matched positions or low completenesses where possible). This gives a most likely solution to the offsets between the position lists, rather than the best solution, which could well include false matches; compare this with a median as opposed to a mean. This registration is then used to identify objects in all datasets, resulting in labelled position lists which are output for use by REGISTER.
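Choosing the spanning tree with the greatest number of matched positions amounts to a maximum spanning tree over the pairing graph. A sketch of that step (using Kruskal's algorithm with a union-find structure; illustrative, not CCDPACK's code):

```python
# Sketch: keep the best-weighted edges that connect all position
# lists without forming loops (a maximum spanning tree).

def max_spanning_tree(n_lists, edges):
    """edges: (weight, node_a, node_b) tuples; returns the kept edges."""
    parent = list(range(n_lists))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i
    kept = []
    for w, a, b in sorted(edges, reverse=True):   # best matches first
        ra, rb = find(a), find(b)
        if ra != rb:                              # no loops allowed
            parent[ra] = rb
            kept.append((w, a, b))
    return kept
```

Edges with few matched positions are considered last, so they are used only if nothing better connects the graph.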
Note it is not necessary that the pixel grids of the images are simply related by an offset, but that their Current coordinate systems are. For instance you might have two images with reasonably accurate SKY coordinate systems which were attached by the observing system, but which you wish to register more accurately by matching image features. The telescope was rotated as well as shifted between the observations so that the X-direction of one data array is different from the X-direction of the other. But since the SKY coordinate system of each only has small errors, the RA-direction of each is the same. Then as long as SKY is the Current coordinate system, the methods in this section can be used to align the images automatically (see section §8.10 for an example). If you know the non-offset parts of the transformation but they are not already present as attached coordinate systems, then you may be able to add them yourself using WCSEDIT or ASTIMP as explained in later sections.
Invoking FINDOBJ, FINDOFF and REGISTER is simple.
This locates the objects on all the images in the current directory, performs pattern-matching and finally registers them, attaching a new coordinate system labelled ‘CCD_REG’ to each image. See §8.2.5 if using ‘*’ for both the IN and INLIST parameters seems mysterious.
If the coordinates of the images are already quite well matched (offsets are expected to be small) and the object matching is just to improve the alignment, then FINDOFF can be invoked with the RESTRICT parameter; this instructs it to look for matching objects only in the parts of the images which overlap according to the existing Current coordinate system, which can dramatically decrease running time and the likelihood of a false match.
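The effect of such a restriction can be pictured with a simple sketch (hypothetical helper functions for illustration, not CCDPACK code): intersect the two images' bounding boxes in the shared coordinate system and keep only the candidate positions inside the overlap, so far fewer pairings need be tried.

```python
# Sketch: restrict candidate positions to the region where two
# image bounding boxes overlap in their common coordinate system.

def overlap_box(box1, box2):
    """Boxes are (xmin, ymin, xmax, ymax); returns the overlap or None."""
    xlo, ylo = max(box1[0], box2[0]), max(box1[1], box2[1])
    xhi, yhi = min(box1[2], box2[2]), min(box1[3], box2[3])
    return (xlo, ylo, xhi, yhi) if xlo < xhi and ylo < yhi else None

def restrict(points, box):
    xlo, ylo, xhi, yhi = box
    return [(x, y) for x, y in points if xlo <= x <= xhi and ylo <= y <= yhi]
```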
FINDOFF’s other parameters are detailed in §0 and some of its internals are explained in P.W. Draper, 1993, ‘Preparing multiple CCD frames for the photometry of extended fields’, Proceedings of the 5th ESO/ST-ECF Data Analysis Workshop.
When datasets are offset in their Current attached coordinate system only by X and Y translations as above, but contain insufficient objects for the fully automated registration described in the previous section, registration may be done in a semi-automated way, using the programs PAIRNDF and REGISTER.
PAIRNDF displays the images to be registered and allows the direct selection of image features which are common between image pairs. This works by asking you alternately to choose a pair of images which has an overlap, and then to line them up. When enough of the datasets have been selected this way, and successfully paired, the global correspondence of the image features is determined. Using this method avoids the need to identify each image feature in a particular sequence; they are just selected as being in ‘common’ between any pair, and PAIRNDF works out the rest using the same methods as FINDOFF, except that a single spanning tree is not chosen since in this case all pairings are assumed to be correct.
Both the pair selection and the pair alignment are done using an intuitive graphical user interface on the Xwindows display. During pair selection you can see any combination of two images side by side, along with their names and other information such as selected FITS headers. They are displayed resampled into their current coordinate frames, so that it should be easy to see where any overlap between the two is. When you have selected a pair which contains objects common to both, you can move to the alignment stage in which you drag one image with the mouse to the correct position on top of the other. The display can be zoomed and scrolled to make careful positioning easy. You are then asked to select centroidable features in the overlap region so that the program can determine the offset between the two accurately.
To summarise, during the graphical part of PAIRNDF you have to do the following:
If in the final step you select Exit yourself rather than pairing all the images, then PAIRNDF will not be able to register all the datasets. In this case it will behave the same as FINDOFF, which is to do all the alignment it can, and associate position lists with those files whose alignment it can work out. Running REGISTER will then add coordinate systems only for those which you have paired, and you will have to align the others in a different way, perhaps with the help of WCSREG.
As with all graphically based interfaces it is difficult to describe the advantages and ease of use; it’s best to try them out.
To align datasets that require more general linear transformations than a simple offset, use the CCDALIGN
procedure. This accepts either a sequence of images or groups of related images (datasets which are already approximately aligned, at least to within the few-pixel capture range of centroiding) and displays them one by one (or the first member of each group). You then simply identify the image features to use, but in the correct order. Only enough image features to determine the approximate image position are required, as the procedure then centroids the image features and works out an approximate registration, which it uses to extrapolate the positions of a reference set onto each dataset. This new extended set of positions is then centroided, picking up any missed objects. The reference set of positions is selected either from a designated image or from the first image. CCDALIGN can then invoke REGISTER itself to perform the fitting of the transformation parameters using all these positions.
Selection of points on the images is done using an intuitive Xwindows-based graphical user interface, which allows you to mark and remove points with the mouse, change the brightness of the display, zoom and scroll the image, and read out positions in any of the image’s attached coordinate systems.
To summarise, during the graphical part of CCDALIGN you have to do the following:
In CCDPACK the positions of identified/detected image features are stored in ordinary text files which are referred to as ‘position lists’. The format of these lists is flexible. Usually position lists have three columns:
these may be separated by commas or blanks. The identifier is an integer value which is used to identify positions which are related (i.e. are of the same object) in different lists.
If more than three columns exist then only the values in the first three are used, though some of the programs propagate extra columns from the input to the output lists. If only two columns exist it is assumed that they are:
such lists may be produced by KAPPA applications. In this case applications which rely on a knowledge of the identifiers assume they are monotonically increasing from one.
Whole-line and in-line comments are allowed in position lists using the character ‘#’. These are not propagated from input to output lists. Note that the final line of the file should be terminated by a newline character (certain text editors do not always enforce this).
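A reader for this format might look like the following Python sketch (an illustration of the rules above, not CCDPACK's own parser): comments are stripped, commas and blanks both act as separators, extra columns are ignored, and two-column lists get identifiers counted monotonically from one.

```python
# Sketch of a position-list reader following the format rules:
# '#' comments, comma or blank separators, identifier + X + Y columns.

def read_position_list(lines):
    positions = []
    for line in lines:
        text = line.split('#', 1)[0].replace(',', ' ').strip()
        if not text:
            continue                        # blank or comment-only line
        cols = text.split()
        if len(cols) >= 3:                  # identifier, X, Y (extras ignored)
            ident, x, y = int(cols[0]), float(cols[1]), float(cols[2])
        else:                               # two columns: X, Y only
            ident, x, y = len(positions) + 1, float(cols[0]), float(cols[1])
        positions.append((ident, x, y))
    return positions
```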
The X and Y values in a position list always refer to the pixel coordinates of an image. When FINDOFF or REGISTER read them they automatically transform them into the Current image coordinate system where appropriate. For this reason there is not normally any need to transform a position list when the corresponding image has been resampled; however, if necessary it can be done using the TRANLIST routine. This is discussed, along with the methods that previous versions of CCDPACK used for storing coordinate transformations, in appendix C.
Usually position lists are ‘associated’ with images. What this means is that when a position list is created a record of its name is kept in the extension of the image. It is then usual to refer to the image instead of the position list when the position list is to be accessed. Applications which create new position lists associate the new position lists with the appropriate image. Using this method avoids any confusion about the relationship of position lists and images, which is vital when determining the registration of many images at one go. It also allows the use of the wildcarding properties of image names to access position lists.
The association of position lists can be disabled completely by setting the NDFNAMES global parameter (CCDSETUP) to false. In this case, position lists must be specified explicitly as a comma separated list of names or read from a text file using indirection; this is exactly the same as for naming images (see “§13”) except that wildcards are not allowed. Output list names may be formed from the modification of these names when NDFNAMES is false, otherwise the input image names are always used.
The name of the position list associated with an image is stored in its .MORE.CCDPACK extension under the item CURRENT_LIST, and can be examined using HDSTRACE (SUN/102) or modified using CCDEDIT.
If you wish to view or edit positions in a list interactively, you can use the IDICURS program, which displays a graphical interface in which the image is displayed with points from a position list plotted over it. Points can be added and removed with the mouse. If you want to display points in a list non-interactively or to print them out, use PLOTLIST.
CCDPACK provides the WCSEDIT routine for direct manipulation of the coordinate system information attached to an image or a group of images. It can examine, add, remove or modify coordinate systems, and select the coordinate system to be regarded as Current.
To examine the coordinate systems attached to an image, you can use WCSEDIT with the parameter MODE set to SHOW; for example:
This shows that there are five coordinate systems in the image and that number 4, labelled ‘SKY’ is the Current one. Note that the first three coordinate systems attached to an image are always GRID, PIXEL and AXIS. SKY is not always present but if it is, it should always represent a celestial coordinate system. GRID and PIXEL always have units the same size as that of a pixel (the difference is that GRID is guaranteed to start at (1,1)).
For the other modes (add, remove, current and set) of WCSEDIT you need to specify a given coordinate system, the ‘target’, for WCSEDIT to work with. This is given using the FRAME parameter, and you can use one of the following formats:
- its index (an integer)
- its domain name (label)
- a null value (‘!’), indicating the Current coordinate system
- a Sky Co-ordinate System (SCS) value such as EQUAT(J2000) (see section “Sky Co-ordinate Systems” in SUN/95).
The first two options are usually the most appropriate. As you can see above, the output of ‘show’ will show you what domains there are, what the index of each is, and which coordinate system is Current for a given image.
To change the coordinate system to be used as the Current one therefore, simply write something like:
which will set the Current coordinate system of all the image files indicated to pixel coordinates. Since the PIXEL coordinates are in some sense the native ones, if you set Current to PIXEL in this way the images will behave in most respects as if they had no attached coordinate systems at all.
Syntax for removing coordinate systems is much the same:
will remove the fourth coordinate system (as listed by ‘wcsedit show’) from the named file.
When adding a new coordinate system you must give the transformation which connects it to the target coordinate system. The transformation can be one of the following types:
A positive value of the distortion coefficient corresponds to a pincushion distortion and a negative value to a barrel distortion. So the following command would add a new coordinate system, labelled ‘SQUASHED’, representing a barrel distortion of the coordinates labelled ‘FOCAL’, with an optical centre at coordinates (1000,1000):
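The shape of this distortion can be sketched as follows (one common parameterisation, assumed here purely for illustration; the exact form used by the underlying AST mapping may differ): each point is displaced radially from the optical centre by a factor depending on the square of its distance, with a positive coefficient pushing points outward (pincushion) and a negative one pulling them inward (barrel).

```python
# Sketch of a radial pincushion/barrel distortion about an optical
# centre (x0, y0): d > 0 gives pincushion, d < 0 gives barrel.

def distort(x, y, d, x0=0.0, y0=0.0):
    dx, dy = x - x0, y - y0
    r2 = dx * dx + dy * dy            # squared distance from the centre
    return x0 + dx * (1.0 + d * r2), y0 + dy * (1.0 + d * r2)
```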
It’s also possible to make fine adjustments to the coordinate systems attached to an image using the SET mode, for example:
changes the name of the Current coordinate system to ‘OBS1’. For more sophisticated use of this feature, see the documentation of WCSEDIT in appendix §B, and of the AST_SET routine in SUN/210.
When run, WCSEDIT will log what it has done, giving the domain of the altered coordinate system where appropriate (even if it was specified in some other way). If it could not perform the requested action on any of the images in the list, an appropriate message will be written, but this does not constitute a fatal error. However, it does write a list to an output file (by default called WCSEDIT.LIS), giving the names of only those images which were successfully accessed, which normally means those which had a coordinate system matching that given by the FRAME parameter. So it’s easy to find out which images were successfully modified:
This name list file can be used as an indirection file to pass to the input of another CCDPACK task. For instance, if you want to do an interactive alignment of only those files which have coordinate systems with the name “FOCAL”, you could follow the above command with this:
Coordinate systems can be exported from and imported to image files using the commands ASTEXP and ASTIMP.
(the MAKESET routine can be used as a substitute for ASTIMP if CCDPACK Sets are being used — see Section 9).
Sometimes the coordinate transformations which must be applied to one set of observations are the same as those required for many other sets. One example of this is the observations from a mosaic CCD camera, in which multiple CCD chips are fixed adjacently in the same focal plane to avoid needing a single very large device; the relative position of each chip to the others will not change as a function of pointing direction. Another example is when the optical distortion of an instrument (at a given wavelength) is known; if this can be applied correctly to one observation it can be applied to many. A third example, which is common because the large focal planes implied by use of a mosaic camera often lead to significant optical distortions, is the combination of these two.
CCDPACK allows this kind of coordinate transformation information to be written to (using ASTEXP), and read back from (using ASTIMP), an external file called an AST file. This effectively stores a coordinate system, and enough information to graft it onto suitable files, for each one of a related group of images. Thus applying the AST file to a set of images adds a new coordinate system to each of them, which can be used for the registration process. Where there are several images in a set, as in the case of a mosaic camera, ASTIMP can determine which coordinate frame to use for which image either by the order in which they are presented, or by usng FITS headers in the file, according to how the AST file was constructed by ASTEXP.
Full details on how to use ASTIMP and ASTEXP can be found in appendix B, but the following gives an example.
The first command takes a set of 4 images, one from each chip of an array, which are aligned in their Current coordinate system (possibly by object matching), and constructs a file inst.ast from them; the parameters ‘IDTYPE=FITSID FITSID=CHIPNAME’ mean that the AST file labels each coordinate system with the value of the ‘CHIPNAME’ FITS header from the reg_data* images, since this identifies which CCD is which in the mosaic camera. The second command applies the file inst.ast to a set of (any number of) images from the same instrument which have not yet been registered; for each one ASTIMP works out which coordinate system to add by matching the ‘CHIPNAME’ FITS header in the image with one of the ones in the AST file.
If a previously prepared AST file for the instrument you are using exists, then the ASTEXP step can be avoided. In any case, once a suitable AST file is available, it can be applied to many sets of images from the same instrument, as long as the instrument’s characteristics remain the same.
There is a more detailed example of the use of ASTIMP in section §8.10.
To combine registration information from different sources, you can use the WCSREG routine.
It is not always possible to register a group of images in a single step, using one kind of procedure. For instance you might have a large group of images with an approximate SKY coordinate system inserted by the observing system, of which some but not all overlap with each other. In this case we would like to register the overlapping ones using the object-matching techniques described in section 8.2, but fall back to the tolerably accurate SKY coordinates to align the rest of the images.
WCSREG can do just this. You give it a number of images, and a list of coordinate systems it can use to make the connections, and if it’s possible to align them all using only these coordinate systems it will do so, adding a new coordinate system labelled ‘CCD_WCSREG’ to them all. In the above example you would write something like
Where there is a choice between coordinate systems to use for alignment, ones given earlier in the DOMAINS list are preferred. The square brackets around the named domains are required here because it is an array of strings.
The new coordinate system added is a copy of (and so has the same units as) the PIXEL coordinate system of the reference image, which is normally the first one listed. Because of this, if you have a set of images which are already aligned, but in an unsuitable coordinate system, you can align them in a pixel-sized coordinate system using WCSREG. Giving a null (‘!’) value for the list of domains will automatically use the current attached coordinates of the reference image for alignment, so that running
will ensure the group of already registered images regdat* keeps its existing alignment, but in a coordinate system in which a unit is pixel-sized. This can be a useful trick prior to resampling, since the resampling program TRANNDF will resample so that a pixel of the output image is one unit square; for this reason resampling directly into a SKY coordinate system, in which a unit is one radian, is bound to give a useless result.
In many cases it will be possible to align a set of images in one step without worrying about these sorts of procedure. For the more complicated cases where coordinate information of varying levels of accuracy and completeness is available from several different sources however, WCSREG, in conjunction with the other programs described in this section, provides powerful facilities for making use of this information. Some examples of them at work can be found in §8.10.
The previous sections describe various methods for aligning a set of images in a common coordinate system. Once this is done, you will usually want to resample and combine or compare them. Before, instead of, or after that however, you might want to see the positions of the images in the aligned coordinates. You can do this using the DRAWNDF application.
If you give DRAWNDF a list of images which are all aligned in their Current attached coordinate system, it will plot outlines of the regions covered by each image file. In this way it is easy to see which parts of your data cover which parts of the coordinate space.
There are two basic ways of running DRAWNDF, according to whether the parameter CLEAR is set to TRUE or FALSE. If CLEAR is TRUE, then the graphics device will be cleared and the outline of the area covered by each image will be shown. By default, a set of axes will be drawn as well. This will give you a good idea of the alignment of your files relative to each other, and by examining the axes you can see the absolute positions too. It can often be a good idea to do this with a set of data files before you resample and combine them, which may be a time-consuming step, just to check that the alignment looks sensible. You can see an example of this in Figure 3.
If CLEAR is FALSE however, DRAWNDF will try to align the outlines with an existing picture on the graphics display. This means that if you have already displayed an image on your graphics device which shares a coordinate system with your data files, you can see where they would fit over it. For instance you could show an image which has SKY coordinates using KAPPA’s DISPLAY program and then run DRAWNDF on a set of data files in the same region; if their Current coordinates are also SKY you will be able to see how they map onto the displayed image. Another useful application is to DISPLAY a mosaic which has been made by combining a set of data files, then to run DRAWNDF on the originals so it is clear which file has contributed to which part of the mosaic. To achieve this alignment with previously drawn graphics, DRAWNDF uses the AGI graphics database in the same way as KAPPA programs; this is described more fully in SUN/95. You can see an example of this in Figure 4.
Finally, DRAWNDF can display the actual image data for you. Unlike KAPPA’s DISPLAY program, which always displays an image as a rectangle whose edges are horizontal and vertical, perhaps with non-orthogonal image Current coordinate axes drawn over it, DRAWNDF always plots on a surface in which the Current coordinate system has horizontal and vertical axes. The image data array is therefore resampled to fit within the image bounds as they would appear in these coordinates. This can give you a good view of what a group of images will look like when they have been resampled and combined into their Current coordinates. Note however that unlike MAKEMOS, DRAWNDF performs no averaging or sophisticated normalisation of images, so that if two images overlap in their Current coordinates, all but the last-plotted will be obscured, and the relative brightnesses may not be right. When this option is used, the display is always cleared. An example of a set of frames from a mosaic camera displayed in this way is given in Figure 5.
Data resampling is normally performed by the TRANNDF application.
This resamples an image from pixel coordinates into its Current coordinate system. So if a set of images shares a common Current coordinate system (attached by any of the methods described earlier in this section), running TRANNDF on them all will enable them to be compared or combined pixel for pixel. The transformation between pixel coordinates and the Current coordinate system can be of any kind, so the program has the capability of ‘rubber-sheeting’. A restriction is that both the forward and the inverse transformations must be available; this is only likely to cause problems with general transformations, for which a tractable inverse does not always exist. If you need to resample and you do not have an inverse transformation, the DRIZZLE program may be used instead, though note that it is not designed for general-purpose resampling, and is slower than TRANNDF.
TRANNDF resamples using one of two techniques: linear interpolation or nearest neighbour. Linear interpolation estimates each output value from the surrounding input pixels (two along each axis), while nearest neighbour simply copies the value of the nearest input pixel. Flux conservation is available, but only for linear transformations. Variances are resampled in the same way as ordinary data.
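As an illustration (a minimal sketch, not the TRANNDF implementation itself), the two techniques can be written out in Python: each output pixel is traced back through the inverse transformation to a position in the input image, whose value is then estimated either from the nearest pixel or by linear interpolation.

```python
import numpy as np

def resample(image, inverse_map, out_shape, method="nearest"):
    """Resample `image` onto a new grid.

    For each output pixel, `inverse_map` gives the corresponding
    (row, col) position in the input image -- this is why this style
    of resampling needs the *inverse* transformation.
    """
    out = np.zeros(out_shape)
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            y, x = inverse_map(r, c)
            if method == "nearest":
                yi, xi = int(round(y)), int(round(x))
                if 0 <= yi < image.shape[0] and 0 <= xi < image.shape[1]:
                    out[r, c] = image[yi, xi]
            else:  # linear: weight the surrounding pixels along each axis
                y0, x0 = int(np.floor(y)), int(np.floor(x))
                fy, fx = y - y0, x - x0
                if 0 <= y0 and y0 + 1 < image.shape[0] and 0 <= x0 and x0 + 1 < image.shape[1]:
                    out[r, c] = ((1 - fy) * (1 - fx) * image[y0, x0]
                                 + (1 - fy) * fx * image[y0, x0 + 1]
                                 + fy * (1 - fx) * image[y0 + 1, x0]
                                 + fy * fx * image[y0 + 1, x0 + 1])
    return out

# Shift an image by half a pixel in x: the inverse map goes output -> input.
img = np.array([[0.0, 2.0], [4.0, 6.0]])
shifted = resample(img, lambda r, c: (r, c - 0.5), (2, 2), method="linear")
```

Note how the sketch only ever evaluates the mapping from output position back to input position, which is why TRANNDF requires an invertible transformation.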
The size of the output images can be estimated from the transformation of selected boundary positions, which is very useful when transforming whole lists of images as the output images are made only as big as necessary.
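The bounding-box estimate can be sketched as follows (a simplified illustration, not CCDPACK code): positions around the boundary of the input image are mapped through the transformation and the smallest enclosing box is taken.

```python
import numpy as np

def output_bounds(shape, transform):
    """Estimate the bounding box of a resampled image.

    Points along the image boundary are mapped into the output
    coordinates and the minimum enclosing box taken, so the output
    image need be made only as big as necessary.
    """
    ny, nx = shape
    # Sample points around the image boundary (not just the corners,
    # to be safe with non-linear transformations).
    edge = []
    for t in np.linspace(0.0, 1.0, 11):
        edge += [(t * ny, 0.0), (t * ny, nx), (0.0, t * nx), (ny, t * nx)]
    mapped = np.array([transform(y, x) for y, x in edge])
    lo = np.floor(mapped.min(axis=0)).astype(int)
    hi = np.ceil(mapped.max(axis=0)).astype(int)
    return tuple(lo), tuple(hi)

# A transformation that doubles the scale and shifts: the bounds follow suit.
lo, hi = output_bounds((100, 200), lambda y, x: (2 * y + 10, 2 * x - 5))
```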
To use TRANNDF type something like:
This resamples all the images in the current directory, naming the output images the same as the inputs except that the string ‘-trn’ is appended to each output name. By default TRANNDF will guess a size for the output image, interpolate using nearest neighbour and conserve flux.
Note that TRANNDF resamples into the Current coordinate system, so that each pixel in the output image is a square in those coordinates. Thus the coordinate system in question must be a suitable one for that purpose. If it has units of the wrong size, then a suitable transformation can be made by adding a new coordinate system with the WCSEDIT or WCSREG programs. In particular it’s no good running TRANNDF on a set of images which have SKY coordinates as Current, since they have units of radians, which are far too big. See the example in §8.5 for how to deal with this.
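To see why radian units are unusable here, consider a hypothetical pixel scale of 0.3 arcseconds per pixel (an assumed value for illustration only):

```python
import math

# A typical optical CCD pixel scale -- an assumed value for illustration.
pixel_scale_arcsec = 0.3

# One pixel expressed in radians, the native units of a SKY coordinate system.
pixel_scale_rad = math.radians(pixel_scale_arcsec / 3600.0)

# Each output pixel is one unit on a side in the Current coordinates, so a
# radian-based system would squeeze the whole image into a minute fraction
# of a single output pixel.
print(f"one pixel = {pixel_scale_rad:.3e} radians")

# A new coordinate system scaled by 1/pixel_scale_rad restores pixel-sized
# units; this is the kind of factor one would supply to WCSEDIT.
scale_factor = 1.0 / pixel_scale_rad
```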
For a quick look at what the resampled data will look like, the DRAWNDF command with IMAGE=TRUE can be used.
The tasks of mosaicing and normalisation are normally performed using the MAKEMOS program. MAKEMOS is a comprehensive program with many capabilities. In its default mode it simply combines images using a selected data combination method (several are supported: mean, median, trimmed mean etc., see §B.1.3). In this it is similar to the MAKECAL routine, but MAKEMOS makes much more efficient use of memory: it is designed to deal with datasets which may not have much overlap and which might have a very large output extent, unlike CCD calibration data where the overlap will usually be complete.
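The combination methods can be illustrated with a small sketch (not the MAKEMOS implementation): a robust estimator such as the median rejects a cosmic-ray hit that affects only one frame of the stack.

```python
import numpy as np

def combine(stack, method="median", trim_fraction=0.1):
    """Combine a stack of aligned images pixel-by-pixel.

    A robust estimator (median, or a trimmed mean that discards the
    most extreme values) protects the result against spurious values
    such as cosmic-ray hits affecting only one image of the stack.
    """
    stack = np.asarray(stack, dtype=float)
    if method == "mean":
        return stack.mean(axis=0)
    if method == "median":
        return np.median(stack, axis=0)
    if method == "trimmed":
        n = stack.shape[0]
        cut = int(n * trim_fraction)
        ordered = np.sort(stack, axis=0)
        kept = ordered[cut:n - cut] if cut else ordered
        return kept.mean(axis=0)
    raise ValueError(method)

# Three aligned exposures; one pixel in the second is hit by a cosmic ray.
frames = [np.full((2, 2), 10.0), np.full((2, 2), 10.0), np.full((2, 2), 10.0)]
frames[1][0, 0] = 5000.0
clean = combine(frames, method="median")   # the median ignores the hit
```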
The other capabilities of MAKEMOS are concerned with data normalisation. Normalisation is expressed as two components, a scale factor and a zero point, which may be controlled independently by the parameters SCALE and ZERO. So:
would determine just scale factors, just zero points, or both scale factors and zero points respectively. The option also exists to modify the data values of the input datasets so that their values are normalised (this may be combined with producing a mosaic, or not, using the OUT parameter). A full description of MAKEMOS is given in appendix §0; some of the philosophy of its algorithms is explained in R.F. Warren-Smith, 1993, ‘The Calibration of Large-Field Mosaics’, Proceedings of the 5th ESO/ST-ECF Data Analysis Workshop.
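The relation being solved for can be illustrated with a simple sketch (MAKEMOS itself uses a robust global optimisation over all the overlaps; the plain least-squares fit below is for illustration only): the overlap pixels of two images are assumed to be related by b ≈ s·a + z, and s and z are estimated from the data.

```python
import numpy as np

def normalise_pair(a, b, scale=True, zero=True):
    """Estimate factors relating overlap pixels of two images, b ~ s*a + z.

    A stand-in for the kind of correction MAKEMOS determines; the real
    program solves robustly and globally over every pairwise overlap.
    """
    a, b = np.ravel(a), np.ravel(b)
    if scale and zero:
        s, z = np.polyfit(a, b, 1)       # fit both scale and zero point
    elif scale:
        s, z = np.sum(a * b) / np.sum(a * a), 0.0
    else:
        s, z = 1.0, float(np.mean(b - a))
    return s, z

# Overlap pixels from two exposures differing in transparency (x0.8)
# and sky level (+5): the fit recovers those factors.
rng = np.random.default_rng(1)
a = rng.uniform(100.0, 200.0, 500)
b = 0.8 * a + 5.0
s, z = normalise_pair(a, b)
```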
In most cases resampling and mosaicing should be done using TRANNDF and MAKEMOS, but for certain specialised applications the DRIZZLE program may be superior. It combines resampling and mosaicing in one task, and the resampling is done using the Variable-Pixel Linear Reconstruction (or informally “drizzling”) algorithm.
This algorithm was originally developed for the Hubble Deep Field, a project whose purpose was to image an otherwise unexceptional region of the sky to depths beyond those of previous astronomical images. Its primary application is to provide a method for the linear reconstruction of an image from a set of undersampled dithered data; it preserves photometry and resolution, and can weight input images according to the statistical weight of each pixel.
The input grid is mapped onto an output grid which is normally finer; where many dithered input images are being combined this allows subsampling of the input data. Before the pixels are resampled onto the output grid, however, they are shrunk into smaller pixels, referred to as “drops”, which rain down onto the output grid. Each drop affects the pixels in the output grid which it covers in proportion to the area of overlap; if an output pixel is not touched by any of the drops it receives no data from the image. This is shown schematically in Figure 6.
The transformation of input grid to output grid is basically the transformation between the pixel coordinates and Current coordinates of the image file; however an additional shrinking factor can for convenience be given using the MULTI parameter of DRIZZLE. The ratio of the linear size of the drop to the size of the input pixel is controlled by the parameter PIXFRAC. The default values for these parameters are 1.5 and 0.9 respectively.
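A much-simplified sketch of the drop-and-overlap idea is given below (with `multi` and `pixfrac` standing in for the MULTI and PIXFRAC parameters, and a plain shift standing in for the full geometric transformation; this is an illustration, not the DRIZZLE implementation):

```python
import numpy as np

def drizzle(image, offset, out_shape, multi=1.5, pixfrac=0.9):
    """Drizzle `image` onto a finer output grid.

    Each input pixel is shrunk into a "drop" of linear size `pixfrac`
    (in input-pixel units) and its flux shared among the output pixels
    it covers in proportion to the overlap area.  `multi` is the
    subsampling factor of the output grid.
    """
    data = np.zeros(out_shape)
    weight = np.zeros(out_shape)
    half = pixfrac / 2.0
    for iy in range(image.shape[0]):
        for ix in range(image.shape[1]):
            # Drop boundaries in output-grid coordinates.
            y0 = (iy + 0.5 - half + offset[0]) * multi
            y1 = (iy + 0.5 + half + offset[0]) * multi
            x0 = (ix + 0.5 - half + offset[1]) * multi
            x1 = (ix + 0.5 + half + offset[1]) * multi
            flux_per_area = image[iy, ix] / ((y1 - y0) * (x1 - x0))
            for oy in range(max(0, int(y0)), min(out_shape[0], int(np.ceil(y1)))):
                for ox in range(max(0, int(x0)), min(out_shape[1], int(np.ceil(x1)))):
                    # Area of overlap between the drop and this output pixel.
                    area = (max(0.0, min(y1, oy + 1) - max(y0, oy))
                            * max(0.0, min(x1, ox + 1) - max(x0, ox)))
                    data[oy, ox] += flux_per_area * area
                    weight[oy, ox] += area
    return data, weight

img = np.ones((4, 4))
data, weight = drizzle(img, (0.0, 0.0), (6, 6))
```

A real implementation would also divide the accumulated data by the accumulated weight where images of different statistical weight overlap; the sketch shows only the area-weighted redistribution of flux, which is what conserves photometry.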
The algorithm has a number of advantages over the resampling and combination methods provided by TRANNDF and MAKEMOS. Since the area of the pixels scales with the Jacobian of the geometric transformation, the algorithm preserves surface and absolute photometry. Flux can therefore be measured using an aperture whose size is independent of position on the output frame. As the algorithm anticipates that output pixels may not receive data from a given input pixel, bad pixels do not cause significant problems, so long as the stack of input images is sufficient to fill in the gaps caused by these zero-weight input pixels. Shifts of a few pixels between input images therefore allow the user to remove small scale defects such as hot pixels, bad columns and cosmic ray hits. Additionally, non-integral drizzling allows the user to recover some information lost to undersampling of the point spread function by the CCD pixels.
However due to the nature of the algorithm, it is computationally intensive, and it is important to consider whether any advantage will be gained from its use. It has been primarily designed to combine undersampled image data in an attempt to reconstruct dithered CCD images. Unlike the resampling methods used by the TRANNDF routine, drizzling requires the forward rather than the inverse mapping for the geometric transformation.
If you intend to make use of DRIZZLE it is recommended that you read the paper A.S. Fruchter and R.N. Hook, 1998, “A Method for the Linear Reconstruction of Undersampled Images” (PASP, in press), which discusses the algorithm in depth. Further information can also be found on the web at http://www.stsci.edu/%7Efruchter/dither/drizzle.html where the authors of the algorithm discuss its use in reconstructing the Hubble Deep Field.
The DRIZZLE program supports the full range of normalisation capabilities described above for MAKEMOS; however, it also allows you to supply scaling and zero-point factors via a plain text input file. If you wish the DRIZZLE program to carry out scaling and/or zero-point corrections, this is controlled via the SCALE and ZERO parameters (as for MAKEMOS) and, additionally, the CORRECT parameter. So:
would determine (and apply) both scale and zero-point corrections. However, DRIZZLE also has the capability to read scale and zero-point corrections in from a (correctly) formatted file by setting the CORRECT parameter to point to the file. So:
would read the corrections in from the file drizzle.dat. MAKEMOS has the ability to generate this file, although it should be noted that it does so in the same manner as DRIZZLE finds the corrections, so there is no advantage in doing it in two steps like this; the facility is provided for users who wish to normalise their frames very accurately.
A full description of DRIZZLE is given in appendix §B.3. Since it does its own resampling, the remarks about the units of the Current coordinate system made above for TRANNDF apply here too; see §8.7.
In this example, we have a set of mutually overlapping images which already contain approximately correct SKY coordinates, attached by the observing system. We would like to fine-tune this alignment by matching image features.
We begin by setting SKY as the Current coordinate system for all the images (they may be in this state anyway but it doesn’t hurt to make sure):
Since the SKY (now Current) coordinates are approximately right, they take account of any rotations of the telescope between exposures, so that the images are effectively related by a simple offset, which means the automated object matching programs can be used. Additionally, since we know the offset is small, we can tell FINDOFF (by setting the RESTRICT parameter) to look for matching objects only in the regions which ought to overlap, which makes it faster and less prone to finding a false match.
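The idea of matching by offset can be illustrated with a crude stand-in for FINDOFF (the real program uses a more sophisticated pattern-matching algorithm): every pairing of an object in one list with an object in the other “votes” for a candidate offset, and the most popular offset wins. Restricting candidate pairs to the expected overlap region, as the RESTRICT parameter does, shrinks this search considerably.

```python
def find_offset(pos_a, pos_b, tol=0.5):
    """Find the translation relating two lists of object positions.

    A much-simplified stand-in for FINDOFF: offsets are binned to a
    tolerance `tol` and the bin supported by the most object pairs wins.
    """
    votes = {}
    for ax, ay in pos_a:
        for bx, by in pos_b:
            key = (round((bx - ax) / tol), round((by - ay) / tol))
            votes[key] = votes.get(key, 0) + 1
    best = max(votes, key=votes.get)
    return best[0] * tol, best[1] * tol

# Objects in image A, and the same objects in image B shifted by (12, -3),
# plus one spurious detection that the voting scheme ignores.
a = [(10.0, 10.0), (30.0, 14.0), (22.0, 40.0)]
b = [(x + 12.0, y - 3.0) for x, y in a] + [(55.0, 55.0)]
dx, dy = find_offset(a, b)
```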
The REGISTER program adds a new pixel-like coordinate system labelled CCD_REG to all the images. We can now resample all the images into these coordinates, which aligns them pixel-for-pixel, and combine them into a mosaic.
Finally, we set the Current coordinate system of the mosaic to SKY, so that when it is displayed RA and Dec coordinates will be shown.
In this example we again wish to use coordinate information from different sources, but the procedure is more involved, bringing together many of the capabilities discussed in the earlier sections. If your data are fairly simple, you may never need to use such a complicated procedure.
Note that while this example provides a good demonstration of much of the CCDPACK World Coordinate System functionality, the steps here could be achieved more simply by using CCDPACK Sets. See the example in Section 9.7.2 for details.
We suppose that we have two observations from a mosaic camera, which were taken on the sky in roughly the positions shown in Figure 7. Because of the way they overlap, the object matching methods of section §8.2 can be used to apply a common coordinate system to some, but not all, of the images. Furthermore, the geometry of the CCD chips in the instrument is known and stored in an AST file instrument.ast as in section §8.4, so that a common coordinate system can be applied to images 1c, and another to images
First of all therefore, we attach the instrument coordinates to each set of files as follows:
instrument.ast has been set up previously so that it attaches coordinate systems to the images according to a FITS header which distinguishes them from each other. The FITS headers in this case contain no information about telescope pointing or orientation, however, so we have to add our knowledge that the camera was rotated 180 degrees between exposures by adding a new coordinate system to one set, related to the one just added by a linear transformation:
In fact since this is quite a common situation the ASTIMP command has a parameter for adding rotations, so that the second ASTIMP and the WCSEDIT could be replaced by:
This step can also be automated in some circumstances by using ASTIMP’s FITSROT parameter. Since all the images now have the same orientation (i.e. the X direction in the Current coordinate system of each is the same), they are related by a simple offset, so the automatic object matching routines can be applied to the overlapping frames:
Note that unlike the previous example we do not use restrict=true here, since although the orientations are now (about) right, the positionings may not be. If FINDOFF has trouble, or you would rather do the alignment by eye, you could use PAIRNDF in place of FINDOBJ and FINDOFF, before continuing with the REGISTER command.
To see what coordinate systems have now been attached we can examine the images using WCSEDIT:
and similarly for the other three. We can see that, because of the way the images overlap, REGISTER has managed to add an aligning coordinate system to 1c, but not 1a. The following table summarises the coordinate systems that we have added to all the images (ignoring the ever-present GRID, PIXEL and AXIS):
This completes successfully and adds a new coordinate system labelled CCD_WCSREG to each of the images. Since they are all aligned with the same coordinates, we should now be ready to do the resampling. It is a good idea at this stage to check that the alignment looks right before generating the mosaic, which may be time-consuming. You can see the alignment by using the DRAWNDF command:
This should display a plot which resembles Figure 7; if it shows an alignment different from what you expect, now is the time to go back and find the problem rather than producing an incorrect mosaic.
Assuming all is well, it only remains to do the resampling and combination as before:
To see this sort of procedure in action you can run the demonstration script wcsexercise described in
Finally, we give an example of an AST file which has been constructed for a real instrument. The instrument is the Wide Field Camera on the Isaac Newton Telescope at La Palma, and consists of four CCDs each pixels. Because the area occupied by the detectors is large, optical distortion effects become quite important near the edges. After registering four images, which included taking account of a pincushion distortion, the file was constructed by the following:
The OUTDOMAIN and OUTTITLE parameters write labels indicating the source of the coordinate systems. The FITSID=CHIPNAME parameter indicates that the chip which each one comes from is to be distinguished by the value of its “CHIPNAME” FITS header card, and the FITSROT=ROTSKYPA parameter indicates that the “ROTSKYPA” FITS header card contains an angle in degrees through which the telescope has been rotated. Applying this to a set of files from the same instrument has the following effect:
Since they now share a coordinate system they can, for instance, be aligned with other images from the same instrument, or resampled directly.
The AST file can be found in $CCDPACK_DIR/INT-WFC.ast. It is thought to be correct to an accuracy of 1 or 2 pixels, which is around half an arcsecond, over the whole focal plane.
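A pincushion distortion of the kind corrected by this file can be modelled as a radial cubic term; the coefficient below is purely hypothetical (the real model is the one stored in $CCDPACK_DIR/INT-WFC.ast):

```python
def pincushion(x, y, k):
    """Apply a simple radial pincushion model: r -> r * (1 + k * r**2).

    (x, y) are offsets from the optical axis in pixels; k is a small
    positive coefficient.  The value used below is hypothetical, for
    illustration only.
    """
    r2 = x * x + y * y
    factor = 1.0 + k * r2
    return x * factor, y * factor

# Near the axis the correction is negligible; far off-axis it grows as
# the cube of the radius and amounts to a few pixels.
k = 1.0e-10                      # hypothetical coefficient, per pixel**2
xc, yc = pincushion(3000.0, 0.0, k)
shift = xc - 3000.0              # displacement at 3000 pixels off-axis
```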
One outline for using the various applications is shown in Figure 8. This illustrates the ways to proceed if you are using the object matching methods (§8.2) for alignment; if you are using coordinate system information from other sources then the routines WCSEDIT, WCSREG and ASTIMP will have a part to play as well.