Starlink routines run on HDS (Hierarchical Data System) files, which normally have a .sdf extension (Starlink Data File). HDS files can describe a multitude of formats. The most common HDS format you will encounter is the NDF (N-Dimensional Data Format)[7]. This is the standard file format for storing data that represent N-dimensional arrays of numbers, such as spectra and images. The parameter files discussed in Section 2.3 are also in HDS format.
Raw data retrieved from the JCMT Science Archive come in FITS format. For information on converting between FITS and NDF see Appendix B.
The .sdf extension of HDS filenames is not required when running most Starlink commands (the exception is Picard).
Parameters control the behaviour of, and the data processed by, Starlink applications. A parameter expects a value of one of the following types: string, boolean, integer, or single- or double-precision floating point. You can specify parameter values either on the command line or in response to prompts. Most parameters have sensible defaults, leaving you to concentrate on the main ones. Parameter usage is described in a tutorial.
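As a minimal sketch (the file name cube.sdf is hypothetical), the same command can be run either way:

   % stats cube order

supplies the NDF and the ORDER keyword on the command line, whereas a bare

   % stats

prompts you for any values it needs.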
A directory called adam is created in your home space by default when you run Starlink applications. In this directory you will find an HDS file for each of the applications that you have run. These files contain the parameters used and the results returned (if appropriate) from the last time you ran a particular application.
You can specify a different location for the adam directory by setting the environment variable ADAM_USER to the path of your alternative location. This is useful when running more than one reduction at the same time, to avoid interference or access clashes.
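For example, in the C-shell (the path shown is illustrative):

   % setenv ADAM_USER /scratch/run1/adam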
To see the ADAM parameters, run hdstrace on any of the parameter files. For example, to see which parameters were used, and the results returned, the last time you ran stats, type the following from anywhere on your system:
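   % hdstrace ~/adam/stats

This assumes the default adam directory in your home space; use your ADAM_USER path instead if you have redefined it.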
The trace shows that stats was run on the data array with ordered statistics, as well as the resulting values. Any of the parameters returned by running hdstrace on an ADAM file can be extracted using the command parget. In the example below, the mean value from the last run of stats is printed to the screen.
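   % parget mean stats

Here mean is one of the results parameters written by stats.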
If you need to abort an application, type !! at a prompt if possible to have a clean exit.
parget is designed to make life easier when passing values between shell scripts. In the C-shell scripting example below, the median value from histat is assigned to the variable med. Note the use of the back quotes.
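   % set med = `parget median histat`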
For more information on scripting your work see Appendix C.
If the parameter comprises a vector of values, these can be stored in a C-shell array. For other scripting languages such as Python, the alternative vector format produced by setting the parameter VECTOR to TRUE may be more appropriate. Single elements of a parameter array may also be accessed using the array index in parentheses, as in the sketch below.
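A hedged sketch, assuming percentiles were requested when histat was run so that its perval results parameter is defined:

   % set range = `parget perval histat`

stores the whole vector in a C-shell array, while

   % parget "perval(2)" histat

prints just the second element, and

   % parget perval histat vector=true

produces the bracketed, comma-separated form that is easier to parse from Python.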
In addition to running hdstrace on the ADAM file, you can find a list of all the parameter names that can be returned with parget in the Kappa manual, under ‘Results Parameters’ for the command in question. Note that these names may differ from the names displayed in a terminal when running the application.
Two Kappa tasks are extremely useful for examining your metadata: fitslist and ndftrace, which can be used to view the FITS headers and the properties of the data (such as its dimensions) respectively. A third option is the stand-alone application hdstrace.
fitslist: This lists the FITS header information for any NDF (raw or reduced). This extensive list includes dates and times, source name, observation type, bandwidth, number of channels, receptor information, exposure time, and the start and end elevation and opacity. In the example below, just the object name is extracted.
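For instance (cube.sdf is a hypothetical file; grep simply filters the full listing):

   % fitslist cube | grep OBJECT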
Likewise, if you know the name of the keyword you want to view, you can use the fitsval command instead, for instance:
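   % fitsval cube OBJECT

(again using the hypothetical cube.sdf)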
ndftrace: ndftrace displays the attributes of the NDF data structure. This will tell you, for example, the units of the data, the pixel bounds, dimensions, world co-ordinates (WCS), and axis assignations. An NDF can contain more than one set of world co-ordinates; setting fullframe=true makes ndftrace list every co-ordinate Frame.
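To trace the hypothetical cube.sdf:

   % ndftrace cube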
hdstrace: hdstrace lists the name, data type and values of an HDS (Hierarchical Data System) object. The following example shows the structure of a time-series cube, including the pixel origin of the data structure. To show just the first x lines of values for each parameter, include the option nlines=x on the command line:
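A sketch with a hypothetical time-series file timecube.sdf (the value 4 is illustrative):

   % hdstrace timecube nlines=4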
Otherwise, to see all the lines and information available in each extension, use nlines=all.
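You can also descend directly into an extension by giving the component path, for example:

   % hdstrace timecube.more.acsis nlines=4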
You can see it descends two levels (into MORE and then ACSIS) to retrieve the information. Other information available at this level includes the receptors and receiver temperatures.
Full details of ndftrace and fitslist can be found in the Kappa manual. Details on Hdstrace can be found in the hdstrace manual.
If you are presented with a data file you may wish to see what commands have been run on it and, in the
case of a co-added cube, which data went into it. Two Kappa commands can help you with this:
hislist: The Kappa command hislist will return the history records of the NDF. Including the option brief=true shortens the listing to the date and application name of each record.
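For the hypothetical cube.sdf:

   % hislist cube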
provshow: The Kappa command provshow displays the details of the NDFs that were used in the creation of the given file. It includes both immediate parents and older ancestor NDFs.
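Again for the hypothetical cube.sdf:

   % provshow cube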
For all Starlink commands you can specify a sub-section of your data on which to run an application. You do this by appending the bounds of the section to be processed to the NDF name. This may be done on the command line or in response to a prompt.
The example below runs stats on a sub-cube within your original cube. This sub-cube is defined by bounds given for each axis. The upper and lower bounds for each axis are separated by a colon, while the axes themselves are separated by a comma. Note that the use of quotes is necessary on a UNIX shell command line, but not in response to a prompt or in many other scripting languages.
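A sketch matching the bounds described next (cube.sdf is hypothetical and is assumed to be in Galactic co-ordinates):

   % stats 'cube(10.5:10.0,0.0:0.25,-25:25)'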
The bounds are given in the co-ordinates of the data (Galactic for Axes 1 and 2, and velocity for Axis 3). You can find the number and names of the axes, along with the pixel bounds of your data file, by running ndftrace. In the example above the ranges are 10.5° to 10° in longitude (note the range runs from left to right), 0° to 0.25° in latitude, and -25 to +25 km/s in velocity. To leave an axis untouched, simply include the comma but do not specify any bounds.
To define your bounds in FK5 co-ordinates use the following format.
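Because sexagesimal values themselves contain colons, this sketch uses the centre-and-extent syntax described at the end of this section (the co-ordinates and extents are illustrative):

   % stats 'cube(0:42:44 5am,41:16:09 5am,)'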
To write a section of your data into a new file (called newcube.sdf
in the example below) use ndfcopy with
bounds.
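A sketch reusing the earlier bounds (the title string is illustrative):

   % ndfcopy in='cube(10.5:10.0,0.0:0.25,-25:25)' out=newcube title='Sub-cube'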
Here the option title
defines a title for the new cube which replaces the title derived from the
original.
You can also define a region as a number of pixels about a given position. The example below extracts a 25×25-pixel cube around the position l = 10°, b = 0°.
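A sketch assuming the NDF-section rule that an integer extent after a tilde is taken as a number of pixels, while the centre is read in the current WCS Frame:

   % ndfcopy in='cube(10.0~25,0.0~25,)' out=newcube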
The extent of an NDF section can be specified as an arc-distance using WCS co-ordinates in the format "centre extent". For instance, 'image(10:12:30 40am,-23:23:43 40am)' will create a section of extent 40 arcminutes on both axes ("as" and "ad" can be used in place of "am", indicating arcseconds and degrees). This only works if the NDF's current Frame is a SkyFrame (an error is reported otherwise).
Command | Description
---|---
showme | If you know the name of the Starlink document you want to view, use showme. When run, it launches a new web page or tab displaying the hypertext version of the document.
findme | findme searches Starlink documents for a keyword. When run, it launches a new web page or tab listing the results.
docfind | docfind searches the internal list files for keywords. It then searches the document titles. The result is displayed using the UNIX more command.
Run routines with prompts | You can run any routine with the option prompt appended to be prompted for every parameter in turn.
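Typical invocations (the document name and keywords are illustrative; SUN/95 is the Kappa manual):

   % showme sun95
   % findme heterodyne
   % docfind ndf
   % stats prompt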