Running Starlink tasks from a script is much the same as running them interactively from the shell prompt. The commands are the same. The difference for use in scripts is that you should provide values on the command line (directly or indirectly) for parameters for which you would normally be prompted. You may need to rehearse the commands interactively to learn what parameter values are needed. Although positional parameters involve less typing, it is prudent to give full parameter names in scripts: positions might change, and parameter names are easier to follow. Cursa is an exception. For this package you should list the answers to prompts in a file, as described in Section 12.8.
The script must recognise the package commands. The options for enabling this are described below.
Then you can run Starlink applications from the C-shell script just by issuing the commands as if you were being prompted. You do not prefix them with any special character, like the % used throughout this manual.
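For instance, once the package commands are known to the script, a line might invoke a task directly (stats and the NDF $KAPPA_DIR/comwest, which appear later in this section, are used purely as an illustration):

```shell
# No % prompt in a script: the command appears exactly as you would type it.
stats $KAPPA_DIR/comwest
```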
If you already have the commands defined in your current shell, you can source your script so that it runs in that shell, rather than in a child process derived from it. For instance,

   % source myscript test

will run the script called myscript with the argument test using the current shell environment; any package definitions currently defined will be known to your script. This method is only suitable for quick one-off jobs, as it relies on the alias definitions being present.
The recommended way is to invoke the package startup scripts, such as kappa or ccdpack, within the script. The script will take a little longer to run because of these extra scripts, but it will be self-contained. To prevent the package startup messages from appearing, you can temporarily redefine echo, as shown here.
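The suppression trick can be sketched as follows (a csh fragment; kappa and ccdpack are the packages assumed here, and csh's alias mechanism does not expand the first word of an alias recursively, so redefining echo this way is safe):

```shell
# Temporarily make echo discard its output, so the startup
# messages printed by the package definition scripts are not seen.
alias echo 'echo > /dev/null'
kappa
ccdpack
unalias echo
```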
If you simultaneously run more than one shell script executing Starlink applications, or run such a script in the background while you continue an interactive session, you may notice some strange behaviour with parameters. Starlink applications use files in the directory $ADAM_USER to store parameter values. If you don't tell your script or interactive session where this directory is located, the tasks will all use the same one. To prevent sharing of the parameter files, use the following tip.
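A csh sketch of the tip, reconstructed from the description that follows (the directory /user1/dro/vela/junk_$$ is the example path used below; the mkdir and setenv lines go near the top of the script and the \rm near the end):

```shell
# Create a private directory for this process's parameter files.
mkdir /user1/dro/vela/junk_$$
setenv ADAM_USER /user1/dro/vela/junk_$$

#  ... run the Starlink applications here ...

# Tidy up, removing the temporary directory and its contents.
\rm -r /user1/dro/vela/junk_$$
```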
This creates a temporary directory (/user1/dro/vela/junk_$$) and redefines $ADAM_USER to point to it. Both exist only while the script runs. The $$ substitutes the process identification number and so makes the name unique. The backslash in \rm overrides any alias of rm.
If you are executing graphics tasks which use the graphics database, you may also need to redefine $AGI_USER to point to another directory. Usually, it is satisfactory to equate $AGI_USER to the $ADAM_USER directory.
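Equating the two is a one-line csh assignment (assuming $ADAM_USER has already been redefined earlier in the script, as in the tip above):

```shell
# Keep the graphics database in the same private directory
# as the parameter files.
setenv AGI_USER $ADAM_USER
```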
In a typical script involving Starlink software, you will invoke several applications. Should any of them fail, you normally do not want the script to continue, unless an error is sometimes expected and your shell script can take appropriate action. Either way you want a test that the application has succeeded.
If you set the ADAM_EXIT environment variable to 1 in your script before calling Starlink applications, then the status variable after each task will indicate whether or not the task has failed: 1 means failure and 0 success.
The NDF allsky is absent from the current directory, so stats fails, as reflected in the value of status, whereas $KAPPA_DIR/comwest does exist.
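At the interactive prompt the effect can be sketched like this (the status values follow from the description above; the error messages that stats itself prints for the missing NDF are omitted):

```shell
% setenv ADAM_EXIT 1

% stats allsky               # no allsky NDF here, so the task fails
% echo $status
1

% stats $KAPPA_DIR/comwest   # this NDF exists, so the task succeeds
% echo $status
0
```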
Here’s an example in action.
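What follows is a csh sketch consistent with the description below; normalize, $ndfgen, $ndfin, and the cleanup block come from that description, while the other parameter names and the messages are illustrative assumptions:

```shell
# Make each Starlink task set $status: 0 on success, 1 on failure.
setenv ADAM_EXIT 1

#  ... earlier commands create the NDF named by $ndfgen ...

# Compare the generated NDF with the input NDF.
normalize in1=$ndfin in2=$ndfgen out=${ndfgen}_norm

# On failure, report and jump to the tidying code at the end.
if ( $status == 1 ) then
   echo "Failed to compare $ndfin with $ndfgen."
   goto cleanup
endif

#  ... further processing of the results ...

cleanup:
# Remove the generated NDF (NDFs are stored in .sdf files).
\rm ${ndfgen}.sdf
exit
```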
The script first switches on the ADAM_EXIT facility. A little later you create an NDF represented by $ndfgen and then compare it with the input NDF $ndfin using normalize. If the task fails, you issue an error message and move to a block of code, normally near the end of the script, where various cleaning operations occur. In this case it removes the generated NDF.
When normalize terminates successfully, the script accesses the output parameters for later processing with parget. This is explained in Section 9.
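For instance, a script might capture the results of the fit like this (a sketch: slope and offset are assumed here to be the names of normalize's output parameters, and parget echoes a named output parameter from a task's most recent invocation):

```shell
# Capture normalize's output parameters into shell variables.
set slope  = `parget slope normalize`
set offset = `parget offset normalize`
echo "Fitted line: slope = $slope, offset = $offset"
```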