In an ideal world you would not need to know how your data are stored; it would be transparent. Starlink applications attempt to achieve this through standard, but extensible, data structures, and through the ability to operate, apparently directly, on other formats via so-called ‘on-the-fly conversion’ (see Section 18.1 and SUN/55).
The official standard data format used by Starlink applications is the NDF (Extensible N-Dimensional Data Format, SUN/33). The data in an NDF are stored using the Hierarchical Data System (HDS), which has numerous advantages, not least that HDS files are portable between operating systems; both NDF and HDS files use the file extension .sdf.
The NDF has been carefully designed to facilitate processing by both general applications like Kappa and specialist packages. It contains an n-dimensional data array that can store most astronomical data such as spectra, images and spectral-line data cubes. The NDF may also contain other items such as a title, axis labels and units, error and quality arrays, and World Co-ordinate System (WCS) information. There are also places in the NDF, called extensions, to store any ancillary data associated with the data array, even other NDFs.
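As an illustration (a schematic sketch only: the component names are the standard NDF ones, but the values and exact layout are invented for this example), the top level of a typical NDF might contain:

   TITLE         <_CHAR*32>    ! e.g. 'M31 B-band mosaic'
   LABEL         <_CHAR*32>    ! quantity stored, e.g. 'Intensity'
   UNITS         <_CHAR*32>    ! e.g. 'counts'
   DATA_ARRAY    <ARRAY>       ! the n-dimensional data values
   VARIANCE      <ARRAY>       ! errors on the data
   QUALITY       <QUALITY>     ! per-pixel quality flags
   WCS           <WCS>         ! World Co-ordinate System information
   AXIS          <AXIS>        ! axis centres, labels and units
   HISTORY       <HISTORY>     ! processing history
   MORE          <EXT>         ! extensions, e.g. the FITS airlock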
The NDF format and its components are described more fully in the section on NDF standard components, which also lists the commands for manipulating those components.
The NDF format permits arrays to have up to seven dimensions, but some applications only handle one-dimensional and/or two-dimensional data arrays. The data and variance arrays are not constrained to a single data type; valid types are the HDS numeric primitive types (see Appendix J).
Many applications are generic, that is, they can work on all or some of these data types directly. This makes them faster, since there is no need to make a copy of the data converted to a type supported by the application. If an application is not generic, it only processes _REAL data. Look in the Implementation Status section of the application's help or reference-manual entry; if none is given, you can assume that processing will occur in _REAL.
In Kappa the elements of the data array are often called pixels, even if the NDF is not two-dimensional.
By default, Kappa plays safe and will not allow you to use the same data structure as both input and output for a command. This minimises the risk of accidentally over-writing valuable data. So, for instance, if you try a command such as the following (here CADD, which adds a scalar constant to an NDF, serves as an illustration):
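   % cadd in=m31 scalar=10 out=m31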
you will find that the value of m31 for Parameter OUT is rejected with a message indicating that the data structure is already in use, and you will be prompted for an alternative value.
However, Kappa does allow you to ‘live on the edge’ if you prefer: if you define the environment variable KAPPA_REPLACE before running a command, then Kappa will happily overwrite the input data structure if requested to do so. You can assign any value you like to this environment variable, since its mere existence is the trigger for this optional behaviour. Note that this facility is only available in those commands that access the input data structures before the output data structures (the vast majority).
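For example, from the C-shell (the value assigned is arbitrary, as noted above):

   % setenv KAPPA_REPLACE 1
   % cadd in=m31 scalar=10 out=m31

and the command now overwrites m31 without protest.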
You can look at a summary of an NDF structure using the task NDFTRACE, and obtain the values of NDF extension components with the SETEXT command (using option=get). HDSTRACE (SUN/102) can be used to look at array values and extensions.
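For instance, for an NDF called m31 in the current directory:

   % ndftrace m31
   % hdstrace m31

NDFTRACE reports the NDF components (dimensions, title, WCS and so on), while HDSTRACE descends through the raw HDS structure.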
There are facilities for editing HDS components, though these should be used with care, lest you invalidate the file. For instance, if you were to erase the DATA_ARRAY component of an NDF, the file would no longer be regarded as an NDF by applications software.
In Kappa, ERASE will let you remove any component from within an HDS container file, but you have to know the full path to the component. SETEXT has options to erase extensions and their contents, without needing to know how these are stored within the HDS file. It also permits you to create and rename extension components, and assign new values to existing components. There are a number of commands for manipulating the FITS-header information stored in the NDF’s FITS extension; these are described in the section on the FITS airlock.
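As a sketch, a FIGARO extension might be removed either with ERASE, giving the full HDS path, or with SETEXT (the extension name here is illustrative, and the SETEXT parameter names shown are assumptions):

   % erase m31.more.figaro ok
   % setext m31 option=erase xname=figaro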
Figaro offers some additional tasks (CREOBJ, DELOBJ, and RENOBJ) for editing HDS components.
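A hedged sketch of two of these Figaro tasks, assuming an extension component MYEXT.SCALE (both the component name and the parameter names are illustrative assumptions):

   % creobj type=_DOUBLE dims=0 object=m31.more.myext.scale
   % delobj object=m31.more.myext.scale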
Although HDS files are portable, after copying them to a new host machine you are recommended to run the application NATIVE on them, for efficiency gains. NATIVE converts the data to the native format of the machine on which you issue the command. If you don’t do this, the conversion occurs every time you access the data in your NDF. NATIVE also replaces any IEEE floating-point NaN or Inf values with the appropriate Starlink bad value. The following converts all the HDS files in the current directory.
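A C-shell loop such as this would do it (the :r modifier strips the .sdf extension, since applications expect the file name without it; the ? is the C-shell's loop-continuation prompt):

   % foreach file (*.sdf)
   ? native $file:r
   ? end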