A detailed list of error codes and their meanings is not available. Kappa produces descriptive, contextual error messages, which are usually straightforward to comprehend. Some of these originate in the underlying infrastructure software. Error messages from Kappa begin with the name of the application reporting the error. That routine may have detected the error itself, or it may merely have something to say about the context in which the error occurred.
The remainder of the section describes some difficulties you may encounter and how to overcome them. Please suggest additions to this compilation.
When running Kappa from the UNIX shell, your command fails with a “No Match” error message.
This means you have forgotten to protect a wildcard character, such as * or ?, so that it is passed to the Kappa command and not interpreted by the UNIX shell. You can precede the wildcard character with a backslash (\), or surround the wildcard characters in double (") quotes. Here are some examples.
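For instance (the file names are illustrative, and FITSHEAD is assumed here to expand the wildcard itself over a set of FITS files):

   % fitshead "swp*.fit"
   % fitshead swp\*.fit

Either form passes the asterisk through to Kappa rather than letting the shell try to expand it.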
Error messages like “Unable to create a work array” may puzzle you. They are accompanied by
additional error messages that usually pinpoint the reason for the failure of the application to
complete. Many applications require temporary or work space to perform their calculations. This
space is stored in an HDS file within directory $HDS_SCRATCH
and most likely is charged
to your disc quota. (If you have not redefined this environment variable, it will point to
your current directory.) So one cause for the message is insufficient disc quota available to
store the work space container file or to extend it. A second reason for the message is that
your computer cannot provide sufficient virtual memory to map the workspace. In this
case you can try increasing your process limits using the C-shell built-in function limit. You can find your current limits by entering limit. You should see a list something like this.
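(The resources listed and the values shown are illustrative; they vary from system to system.)

   cputime         unlimited
   filesize        unlimited
   datasize        131072 kbytes
   stacksize       8192 kbytes
   coredumpsize    unlimited
   memoryuse       unlimited
   vmemoryuse      unlimited
   descriptors     4096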
The relevant keywords are datasize and vmemoryuse. In effect datasize specifies the maximum total size of data files you can map at one time in a single programme. The default should be adequate for most purposes and only need be modified by those working with large images or cubes. The vmemoryuse keyword specifies the maximum virtual memory you can use.
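For instance, the C-shell command

   % limit datasize 32m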
sets the maximum size of mapped data to 32 megabytes. Values cannot exceed the system limits. You
can list these with the -h
qualifier.
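For example,

   % limit -h

lists the hard (system) limits for each resource.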
Although you can set your limits to the system maxima, it doesn’t mean that you should just increase your quotas to the limits. You might become unpopular with some of your colleagues, especially if you accidentally try to access a huge amount of memory. If you cannot accommodate your large datasets this way, you should fragment your data array, and process the pieces separately.
After receiving this error message in an ICL session you may need to delete the scratch file by hand. The file is called txxxx.sdf, where xxxx is a process identifier. A normal exit from ICL will delete the
work-space container file.
Some applications read the name of the NDF used to create a plot or image from the graphics
database in order to save typing. Once in a while you’ll say “that’s not the one I wanted”. This is
because AGI finds the last DATA
picture situated within the current picture. Abort the application via
!!, then use PICCUR or PICLIST to select the required FRAME picture enclosing the DATA picture, or
even select the latter directly. You can override the AGI NDF also by specifying the required NDF on
the command line, provided it has pixels whose indices lie within the world co-ordinates of the DATA picture. Thus supplying, say, the NDF called myndf on the command line will make the application inspect that NDF rather than the one recorded in the database. The command PICIN will show the last DATA picture and its associated NDF.
You may receive an error message which says it failed to store such-and-such a picture in the graphics database. This happens when the database has become corrupted for reasons external to Kappa. Don't worry: usually your plot will have appeared, and to fix the problem run GDCLEAR or delete the database file ($AGI_USER/agi_node.sdf, where you substitute your system's node name for node).
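For example (the node name mymachine is illustrative):

   % gdclear

or

   % rm $AGI_USER/agi_mymachine.sdf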
You will need to redraw the last plot if you still require it, say for interaction.
The reason for invisible line graphics on your graphics device is that they are drawn in black or a dark grey. Most likely someone has been using other software on your graphics device, or it has been reset. PALDEF will set up the default colours for the palette, and so most line graphics will then appear in white. Alternatively, you can assign a brighter colour to the individual palette entry concerned with PALENTRY.
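A minimal example of the first remedy (the device name is illustrative):

   % paldef device=xwindows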
If the above error appears from DAT_SLICE and you are (re)prompted for an NDF, the most likely cause is that you have asked an application that cannot handle NDF sections to process one. Use NDFCOPY to make a subset before running the application in question, or process the whole NDF.
This means that you have forgotten to ‘escape’ parentheses, probably when defining an NDF section
in the UNIX shell. Try inserting a backslash before each parenthesis or enclosing all the special characters inside double (") quotes.
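For example (the NDF name and section are illustrative):

   % stats myndf\(1:100,1:100\)
   % stats "myndf(1:100,1:100)"

Both forms pass the section specification through to the application intact.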
You may receive an error message ending "(x) in an unallocated position".

Check the usage of the application you are running. One way of adding positional parameters unintentionally is to forget to escape the " from the shell when supplying a string with spaces or wildcards (the command, NDF name, and title in the following sketch are illustrative). For example, this error would arise if we entered
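   % settitle myndf title="A map of the galactic centre"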
instead of say
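   % settitle myndf title='"A map of the galactic centre"'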
which protects all special characters between the single quotes.
You may receive an error message of the form "x is not in the menu. The options are…".

You have either made an incorrect selection, or you have forgotten to escape a metacharacter. For the former, you can select a new value from the list of valid values presented in the error message. For the latter, part of another value is being interpreted as a positional value for the parameter the task is complaining about.
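For instance, suppose a graphics task with a title parameter PLTITL and a menu-style MODE parameter (both assumed here purely for illustration) were given

   % linplot myndf pltitl="A plot of the spectrum"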
Here it thinks that plot
is a positional value. Escape the "
to cure the problem.
Each NDF has an associated current co-ordinate system which is used when reporting positions within the NDF, or when obtaining positions from the user. If you want to either see, or give, positions in a different co-ordinate system, you need to change the current co-ordinate system (more often called the current co-ordinate frame) of the NDF by using command WCSFRAME. For instance (myndf here stands for your NDF),
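   % wcsframe myndf frame=pixel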
will cause all subsequent commands to use pixel co-ordinates when reporting positions, or obtaining positions.
Certain combinations of magnetic tape produced on one model of tape drive but read on a different model seem to generate parity errors that are detected by the MAG_ library that FITSIN uses. However, this doesn’t mean that you won’t be able to read your FITS tape. The UNIX tape-reading commands seem less sensitive to these parity errors.
Thus you should first attempt to convert the inaccessible FITS files on tape to disc files using the UNIX
dd command, and then use the FITSDIN application to generate the output NDF or foreign format.
For example, to convert a FITS file from device /dev/nrst0 to an NDF called ndfname, you might enter something like this (the FITSDIN parameter order shown is an assumption; check its prompts or documentation if in doubt):
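   % dd if=/dev/nrst0 of=file.fits bs=2880
   % fitsdin file.fits ndfname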
where file.fits
is the temporary disc-FITS file. The 2880 is the length of a FITS record in bytes.
Repeated dd commands to a no-rewind tape device (those with the n
prefix on OSF/1 and the n
suffix
on Solaris) will copy successive files. To skip over files or rewind the tape, use the mt command. For
example, the commands shown below respectively move the tape on device /dev/rmt/1n forward three files, so that it is positioned at the start of the fourth file; move back two files on the default tape drive (defined by the environment variable TAPE); and rewind to the start of the tape on device /dev/nrmt0h.
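A sketch using the standard mt syntax (without the -f option mt acts on the drive named by TAPE):

   % mt -f /dev/rmt/1n fsf 3
   % mt bsf 2
   % mt -f /dev/nrmt0h rewind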
. Thus it is possible to write a script for
extracting and converting a series of files including ranges, just like FITSIN does.
If the above approach fails, try another tape drive.
If you attempt to read a FITS magnetic tape with FITSIN, you might receive an error complaining that the device is unknown or unavailable
when you enter the device name. The magnetic-tape system uses an HDS file called the device dataset (DEVDATASET) to store the position of the tape between invocations of Starlink applications.
When FITSIN is given a name, the magnetic-tape system validates the name to check that it is a
known device. There should be a devdataset.sdf
file (within /star/etc
at Starlink sites) containing a
list of at least the drives available at your site. What FITSIN is complaining about is
that the device you have given is not included in the DEVDATASET file. Now this might
be because you mistyped the device name, or that the particular device is not accessible
on the particular machine, or because your computer manager has not maintained the
DEVDATASET when a drive was added. You can look at the contents of the DEVDATASET with HDSTRACE, for instance:
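   % hdstrace /star/etc/devdataset

(This assumes the standard Starlink location mentioned above; substitute your site's path if it differs.)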
Oh and one other point: make sure the tape is loaded in the drive. Yes this mistake has happened (not mentioning any names) and it is very hard to diagnose remotely.
There is a class of error that arises when an HDS file is corrupted. The specific message will depend on the file concerned and where in the file the corruption occurred. The most likely reason for file corruption is breaking into a task at the wrong moment, or two processes trying to write to the same file at the same time.
If you want to process simultaneously from different sessions—say one interactive and another in
batch—it is wise to redefine the environment variable ADAM_USER, and also AGI_USER if you want graphics,
on the same machine. The environment variables should point to a separate existing directory for each
additional session. This will keep the global and application parameters, and the graphics database
separate for each session.
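For example, in the C shell (the directory name is illustrative):

   % mkdir $HOME/adam_batch
   % setenv ADAM_USER $HOME/adam_batch
   % setenv AGI_USER $HOME/adam_batch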
The way to look for corrupted HDS files is to trace them. Assuming that $ADAM_USER and $AGI_USER are defined, commands like these
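   % hdstrace $ADAM_USER/GLOBAL
   % hdstrace $ADAM_USER/ardmask
   % hdstrace $AGI_USER/agi_cacvad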
trace the GLOBAL parameter file, the application you were running when the weird error occurred (here ARDMASK), and the graphics database for machine cacvad. Once you have identified the problem
file, delete it. If that proves to be the globals file, you might want to retain the output from Hdstrace,
so that you can restore the former values of the global parameters. Deleting the graphics database is something you should
do regularly, so that’s not a problem.
If you have been running Kappa from ICL, you will need to check the integrity of the monolith parameter file instead of the individual parameter file. Which one depends on the type of task that failed: there is one monolith each for graphics, NDF components, and the rest (mostly image processing), and the parameter file is named after the corresponding monolith interface file.
If that doesn’t cure the problem, send a log of the session demonstrating the problem to the Starlink
Software support mailing list (starlink@jiscmail.ac.uk
), and we shall endeavour to interpret it for
you, and find out what’s going wrong.