4 Science pipeline processing of SCUBA-2 data

 4.1 Running the science pipeline at your home institution
 4.2 Pipeline products
 4.3 Calibration
 4.4 Customizing the map-making
 4.5 Running the science pipeline at EAO/JCMT
 4.6 Processing examples

4.1 Running the science pipeline at your home institution

If the data for your project have been downloaded from CADC and placed in a single directory, the easiest procedure is to create a text file containing the name of each of these raw files. That file should contain either the full path to each file or the path relative to the current directory (or to the directory defined by ORAC_DATA_IN). The data can be processed with the commands:

  % oracdr_scuba2_XXX -cwd
  % oracdr -loop file -files <list_of_files>

where XXX is the wavelength (450 or 850). An optional UT observation date in the form YYYYMMDD may be given (e.g. 20100301). If the date is omitted, the current date is assumed; note, however, that the file naming convention uses the date on which the data were taken. The initialization command only needs to be run once per UT date, and may be given the -honour flag to use existing definitions of the relevant environment variables. Alternatively, the -cwd flag may be given (as above) to force the pipeline to use the current working directory for all input and output.
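For example, assuming 850 μm data taken on 2010 March 1 have been placed in a directory /path/to/rawdata (the directory name and date here are illustrative), the full sequence might look like:

  % ls /path/to/rawdata/*.sdf > mydata.lis
  % oracdr_scuba2_850 20100301 -cwd
  % oracdr -loop file -files mydata.lis

Here mydata.lis contains the full path to each raw file, so the pipeline can locate the data regardless of the current directory.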

Note that there is no need to uncompress the data files prior to running the pipeline: Orac-dr can accept files compressed with gzip (ending .sdf.gz) and will uncompress them itself. However, be aware that the uncompressed files are not deleted at the end of processing (Orac-dr does not delete raw data).

Each observation is processed separately and the images combined to form a single output image. If the list of files contains data from multiple sources, the pipeline will treat each source separately and create different output files accordingly. Calibration is handled automatically (see 4.3 below).

The default science recipes will display the individual observation images plus the final coadded image using Gaia. (The display can be turned off, if desired, by adding -nodisplay to the Orac-dr command line.)

4.2 Pipeline products

The science data products from the pipeline have a suffix of _reduced. The files beginning with s are the products from individual observations; the files beginning with gs are the coadded observations for a single object. The products from non-science observations may have different suffixes, and may be three-dimensional cubes. See the documentation on the individual recipes in Appendix F for further details on those products.

In addition to the data files, the reduced products have accompanying PNG-format images, 64, 256 and 1024 pixels on a side, for easy viewing in an image viewer or web browser.

4.3 Calibration

If no calibration observations are available, and unless otherwise instructed, the pipeline will apply standard flux conversion factors (FCFs) to calibrate the images in mJy beam^-1. Currently these are 537000 mJy beam^-1 pW^-1 at 850 μm and 491000 mJy beam^-1 pW^-1 at 450 μm. (See also [3].)

4.4 Customizing the map-making

The pipeline uses the Smurf dynamic iterative map-maker (makemap) to create maps from raw data. A detailed description of the map-maker is given in SC/21 and [2]. The map-maker uses a configuration file to control the data processing; users with advanced knowledge of the map-maker may modify this file. The SCUBA-2 pipeline may be given the name of an alternative or customized configuration file via the recipe parameter capability of Orac-dr. A number of pre-defined configuration files exist in the directory $STARLINK_DIR/share/smurf.
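The pre-defined makemap configuration files (which use a dimmconfig prefix) can be listed with, for example:

  % ls $STARLINK_DIR/share/smurf/dimmconfig*.lis

Any of these may be used directly or copied and edited as a starting point for a customized configuration.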

Once a suitable configuration file has been created, add its name to a recipe parameter file as follows:

  [REDUCE_SCAN]
  MAKEMAP_CONFIG = myconfigfilename.lis

and add -recpars recipe_params.lis to the command line when running the pipeline, where recipe_params.lis is the name of the recipe parameter file (which must be in the current directory if the path is not given). The makemap configuration file must exist in the current working directory, in one of the directories defined by the environment variables MAKEMAP_CONFIG_DIR, ORAC_DATA_OUT, or ORAC_DATA_CAL, or in $STARLINK_DIR/share/smurf. These locations are searched in that order and the first match is used.
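For example, to process the files listed in mydata.lis using the recipe parameter file above (the file names here are illustrative):

  % oracdr -loop file -files mydata.lis -recpars recipe_params.lis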

Note that if running a recipe other than REDUCE_SCAN (such as one of the dedicated JLS recipes), that recipe's name should be placed in the square brackets instead.
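For example, if the data are processed with the REDUCE_SCAN_FAINT_POINT_SOURCES recipe (used here for illustration), the recipe parameter file would read:

  [REDUCE_SCAN_FAINT_POINT_SOURCES]
  MAKEMAP_CONFIG = myconfigfilename.lis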

4.5 Running the science pipeline at EAO/JCMT

The raw data are stored at EAO in the same way as at the summit. (It is also possible to replicate this layout at your home institution, but in general it will not be worth the effort: use the procedure above instead.) The machine sc2dr5 is available at the summit for general user data processing.

If processing data from a single night, Orac-dr can be run with the -loop flag option to indicate that the pipeline should examine the contents of flag files (which end in .ok). The flag files contain the paths to the files to be processed, and have a fixed naming convention so that the pipeline can recognize them. Use the -list option to specify the observation numbers to be processed (otherwise the pipeline will begin at observation 1). The command scuba2_index will produce a summary of the available data.

If processing data from multiple nights, create a text file with the names of the relevant data files, as for running at your home institution above, and follow the same procedure.

4.6 Processing examples

To process a set of data downloaded from the JCMT archive at CADC, where the files to be processed have been listed in a text file called mydata.lis:

  % oracdr -loop file -files mydata.lis

To process all files starting at observation 21 (skipping non-existent files) until there are no more files:

  % oracdr -loop flag -from 21 -skip

To process the files from a list of observations (e.g. 21, 22, 28, 29 and 30):

  % oracdr -loop flag -list 21,22,28:30

Note the use of a colon to specify a contiguous range of observation numbers.

Two additional options are useful when running on a remote machine or when an X display is not available: