At the summit, the pipeline is normally started by the telescope support specialist (TSS), because normal user accounts lack the privileges needed to write to the data output directories.
There are four pipelines running at the telescope: a QL and a summit version for each wavelength. Each
pipeline runs on a separate data reduction (DR) machine (sc2dr#, where # is 1–4). Raw data are stored
in /jcmtdata/raw/scuba2/sXX/YYYYMMDD, where sXX is the subarray and YYYYMMDD is the current UT
date. Reduced data are written to
/jcmtdata/reduced/dr#/scuba2_XXX/YYYYMMDD, where dr# is the number of the machine running the
pipeline and XXX is either 850 or 450. The directory /jac_sw/oracdr-locations contains files that list
the locations of the output directories for each pipeline (and therefore which DR machine is processing
which pipeline). Note that the output directories are local to their host computers (though they are
NFS-mounted by the other DR machines).
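As an illustration of the naming scheme described above, the following sketch builds the raw and reduced paths for a hypothetical case (the subarray, UT date, DR machine number, and wavelength used here are made-up example values, not part of the manual):

```shell
# Sketch of the path conventions above, using illustrative values:
# subarray s4a, UT date 20240401, pipeline running on sc2dr1 at 450 um.
subarray=s4a
utdate=20240401
drnum=1
wave=450

raw=/jcmtdata/raw/scuba2/${subarray}/${utdate}
reduced=/jcmtdata/reduced/dr${drnum}/scuba2_${wave}/${utdate}

echo "raw data:     $raw"
echo "reduced data: $reduced"
```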
Each pipeline waits for new data to appear and processes them automatically, choosing the correct recipe from the observation type (the chosen recipe may be modified by the particular pipeline being run).
DRAMA must be running on the QL DR machines, and the DRAMA task names must be defined. The task names are communicated through the ORAC_REMOTE_TASK environment variable, which contains a comma-separated list of names. The usual form of an individual task name is TASK@server, e.g., SC2DA8D@sc2da8d. The task name is in upper case; the name of the machine serving the parameter is in lower case.
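For example, a QL pipeline expecting data from two acquisition tasks could have its environment configured as follows before start-up. SC2DA8D@sc2da8d is the example task name from the text; the second task, SC2DA8C@sc2da8c, is a hypothetical name following the same TASK@server convention:

```shell
# Comma-separated list of DRAMA tasks the QL pipeline should expect
# data from. The second task name is hypothetical, shown only to
# illustrate the list format.
export ORAC_REMOTE_TASK=SC2DA8D@sc2da8d,SC2DA8C@sc2da8c

# Show each task on its own line (upper-case task, lower-case server):
echo "$ORAC_REMOTE_TASK" | tr ',' '\n'
```

Under csh the equivalent is `setenv ORAC_REMOTE_TASK SC2DA8D@sc2da8d,SC2DA8C@sc2da8c`.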
The QL pipeline is started with the following commands (substitute 450 for 850 in this and the summit-pipeline examples to run the short-wave pipeline):
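The command listing appears to be missing here. Based on the standard ORAC-DR initialisation scripts for SCUBA-2, a typical QL start-up looks like the following sketch; the script name and options are assumptions and should be checked against the installed ORAC-DR:

```shell
# Set up the ORAC-DR environment for the 850 um QL pipeline
# (defines the input/output directories for the current UT date):
oracdr_scuba2_850_ql

# Start the pipeline, taking its input from the DRAMA tasks
# listed in ORAC_REMOTE_TASK:
oracdr -loop task &
```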
The QL pipeline is fed data via DRAMA parameters and must be told the names of the tasks to expect data from, as described above. QL-specific recipes will be used if present. A stripchart program, which plots a number of quantities derived by the QL pipeline as a function of time, is made available once the QL pipeline has been initialized; type xstripchart to run it. (Note that the stripchart is a separate task, not part of the pipeline itself.)
The summit pipeline is started by:
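The command listing appears to be missing here. A typical summit start-up, mirroring the QL example and using the same assumed ORAC-DR script-naming convention, might be:

```shell
# Set up the ORAC-DR environment for the 850 um summit pipeline:
oracdr_scuba2_850_summit

# Start the pipeline, watching for flag files and skipping
# observations whose raw data do not exist:
oracdr -loop flag -skip &

# If the pipeline has to be restarted during the night, give the
# observation number to resume from, e.g. observation 27:
oracdr -loop flag -skip -from 27 &
```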
The summit pipeline takes the names of the data files from flag files, and skips observations whose data do not exist. Summit-specific recipes will be used if present. Should the pipeline need restarting, the -from argument must be given to tell it the observation number from which to begin processing.