The process of adding a package to the build system, and autoconfing it, is reasonably mechanical. The main differences from the traditional build system are as follows.
There is now no ./mk
file, and so no platform configuration using the $SYSTEM
environment variable. Instead, all
platform dependencies should be discovered by the configuration process. It is only in rather extreme cases that
you will need to resort to platform-specific code, and that should be handled by the starconf macro
STAR_PLATFORM_SOURCES
.
You should try to avoid mentioning or referring to any specific platform when configuring. Test for features: work with those you find, and work around those you don’t; don’t test for platforms in the belief that you can then reliably deduce which features are available.
Traditional Starlink makefiles had two phases, ‘build’ and ‘install’ (plus the various export targets). These
makefiles often did on-the-fly editing of scripts as they were being installed, to edit in version numbers, or the
correct path to Perl, for example. There was also implicit configuration done in the ./mk
script, which specified
platform-specific compiler flags, or versions of ‘tar’.
GNU-style projects, on the other hand, have three phases, ‘configure’, ‘build’ and ‘install’, and source
file editing happens only in the first two – installation consists only of the installation of static
files. Most configuration editing happens at configure time, when .in
files are substituted with
static information, such as the absolute paths to programs, determined as part of configuration. In
the case where the substitution involves installation directory variables, GNU (and thus general)
conventions demand that this be done at build time, since these directories involve the $(prefix)
makefile
variable, and it is deemed legitimate for the user to specify a different value at build time (make
prefix=xxx
) from that specified or implied at configuration time. The user may then specify a different
prefix again at install time (make prefix=yyy install
) for the purposes of relocating or staging the
install, but this must not invalidate the value of $(prefix)
which may have been compiled into
applications. This is discussed in some detail in section 4.7.2 Installation Directory Variables of
the autoconf manual, but you generally do not have to worry about it, since it is rather rare in
practice that you have to compile installation directories into the applications and libraries that you
build.
In general, whereas traditional Starlink makefiles quite often performed spectacular gymnastics at install time, GNU-style makefiles generally do nothing at install time, other than occasionally adding extra material to the install via one of the installation hooks supplied by automake (see section What gets installed of the automake manual).
In the traditional build system, the master source was regarded as rather private to the developer who ‘owned’ the code, who was free to use whatever occult means they desired to produce the sources which were put into the three distribution objects, ‘export_source’ (just the source), ‘export_run’ (just the executable) and ‘export’ (both). Now, everything should be put into the CVS repository, including any code-generation tools either as separate packages or as local scripts, and it is this source set which is used for nightly builds and the like. If you need tools to generate some of the distributed sources, and they cannot be included in the package for some reason, they should be checked in, and your component should declare a ‘sourceset’ dependency on the required tool (see Sec. A.17).
With these remarks out of the way, the following is a description of the steps involved in bringing first an application into the new fold, and then a third-party component.
The example here shows the autoconfing of the adam library, chosen simply because it’s relatively simple.
Make a directory to hold the package, and add it to the repository
Get the complete set of source files, and check them in to the repository. This means unpacking all the files in
adam_source.tar
, which you can find in /star/sources/pcs/adam
(as it happens).
In this case, the adam_source.tar
distribution tarball is a suitable set of sources. This is not always true, since
some Starlink distributions – especially some of the larger libraries and applications – do quite elaborate
processing of their sources in the process of creating this ‘source’ tarball; in these cases, you should
attempt to obtain a more fundamental set of sources from the package’s developer (if that is not
you).
The ideal source for new code is a CVS or RCS repository. A CVS repository is easy to import – you just tar it up and unpack it into the correct place within the Starlink repository. An RCS repository is barely any harder: the only difference is that you have to create the Starlink repository directory structure by hand and copy the RCS files into place within it. The only gotcha with this route is getting the permissions right on the resulting repository: you must make sure that everyone who should have any access to the repository can read and write each of the directories.
CVS repository access is controlled by groups (at least when the repository is shared), and so each directory
within the repository must have a suitable group ownership, with group-write permissions; each
directory must also have the setgid bit set, so that any directories created within it inherit its gid.
Make sure you check this as soon as you put the repository in place, or else everyone will have
problems with the repository. Within the Starlink repository in particular, all participants are part of the
cvs
group. In short, you can set the correct permissions on an imported directory foo
with the
commands
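A sketch of suitable commands (assuming GNU find, and that the group really is called cvs, as below):

    % find foo -type d -exec chgrp cvs {} \;
    % find foo -type d -exec chmod g+ws {} \;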
This sets the group-owner of each directory to be cvs
, and sets the group-write and set-group-id bits in the
permissions mask (don’t tinker with file permissions, since these affect the permissions of the checked out files).
You need not worry about file or directory ownership, since this always ends up being the last person who
committed a file. Note: The instructions here are based on observation of CVS repositories; the actual
requirements don’t seem to be formally documented anywhere.
Add all of the source files, including files like the mk
script and the old makefile
, which we are about to remove.
Tag this initial import with a tag <component>-initial-import
, so that it is possible to recover this old-style
distribution if necessary. As mentioned above, it is not always completely clear what constitutes the old-style
source set: so don’t do this step mechanically, use your judgement, and above all avoid losing information. Note
also that some of the infrastructure libraries were added before we settled on this particular tagging practice,
and so lack such an initial tag.
In this present case, the adam_source.tar
file includes a Fortran include file containing error codes, adam_err
;
there are two problems with this.
Firstly, the filename should be uppercase: the file is generally specified within the program in uppercase, and it
should appear thus on the filesystem. The traditional makefile works around this by creating a link from
adam_err
to ADAM_ERR
, but this won’t work (and indeed will fail messily) on a case-insensitive filesystem like
HFS+, used on OS X.
The second problem is that this is a generated file, but its source is not distributed, has probably been mislaid,
and in any case the generation was probably done on a VAX a decade ago. All is not lost, however. The
functionality of the VAX ‘message’ utility is duplicated in the application messgen, in a component of the same
name, along with an application cremsg which constructs a source file from this message file. Thus it is neither
adam_err
nor ADAM_ERR
which we should check in, but the source file adam_err.msg
which we reconstruct using
cremsg. Thus with error files, it is the (probably reconstructed) .msg
file which should be checked in to the
repository, and not the _err
, _err.h
or fac_xxx_err
files which you may have found in the old-style
distribution.
The file adam_defns
is similar, but this is genuinely a source file, so we need do nothing more elaborate
than rename it to ADAM_DEFNS
, then add it to the repository and remove the original lowercased
version. Many packages have one or two xxx_par
files, and these should be similarly renamed to
XXX_PAR
.
Note that CVS preserves access modes when it stores files, so we should make sure that the script
adam_link_adam
is executable before checking it in, and we need not bother to make it executable as part of
the build process. On the other hand, scripts which are substituted by configure do need to be
made executable explicitly, which you do by a variant of the AC_CONFIG_FILES
macro. The macro
invocation
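    AC_CONFIG_FILES([foo], [chmod +x foo])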
substitutes foo
from source file foo.in
and then makes it executable. Note that the sequence
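    AC_CONFIG_FILES([foo])
    chmod +x foo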
would not work, since this is one of the cases where autoconf macros do not simply expand inline to shell code. For further discussion, see the section Performing Configuration Actions in the autoconf manual.
Create files configure.ac
, Makefile.am
and component.xml.in
by copying the templates in the starconf
buildsupport directory (starconf --show buildsupportdata
); the fields in the component.xml.in
file are
discussed in Sec. 2.2.4. If you have an editor that can use it, you might also want to create a link to the DTD
used for the component.xml
file, which is in the same directory. Edit these files as appropriate,
using information in the original Starlink makefile for guidance (so it’s useful to keep a copy of the
original makefile
handy, rather than simply deleting it as illustrated above). Then check the files
in.
What edits should you make?
The adam Makefile.am
file looks as follows:
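In outline it is something like this (the routine and include-file lists here are abbreviated and illustrative, not copied from the real file):

    lib_LTLIBRARIES = libadam.la
    dist_bin_SCRIPTS = adam_link_adam
    include_HEADERS = $(PUBLIC_INCLUDES)
    libadam_la_SOURCES = $(F_ROUTINES) $(PUBLIC_INCLUDES)

    F_ROUTINES = adam_send.f adam_receive.f adam_reply.f adam_getreply.f \
                 adam_path.f adam_prcname.f
    PUBLIC_INCLUDES = ADAM_DEFNS ADAM_ERR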
Four out of six of these variable declarations are variables meaningful to automake (see Sec. 2.1.2), and the
other two are simply copied from the original makefile
.
The adam configure.ac
looks like this:
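In outline it is something like this (the version number, bug-report address, automake version string and dependency lists are illustrative rather than copied from the real file):

    AC_INIT([adam], [1.0], [starlink@jiscmail.ac.uk])
    AC_PREREQ(2.50)
    AM_INIT_AUTOMAKE(1.8.2-starlink)
    AC_CONFIG_SRCDIR([ADAM_DEFNS])
    STAR_DEFAULTS

    AC_PROG_FC
    AC_PROG_LIBTOOL
    STAR_CNF_COMPATIBLE_SYMBOLS

    STAR_DECLARE_DEPENDENCIES([build], [cnf ems sae])
    STAR_DECLARE_DEPENDENCIES([link], [cnf ems])

    dnl  Unusually, ADAM_ERR is not generated from a .msg file: it is a
    dnl  checked-in source file in its own right.

    AC_CONFIG_FILES([Makefile component.xml])
    AC_OUTPUT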
The first five lines are straightforward boilerplate (see Sec. 2.1.1). The next three find a Fortran compiler, declare
that we want to use libtool to build our libraries, and finally that we wish the symbols in that library to be of the
sort that the CNF package is able to handle (see Sec. A.16). The ‘FC’ autoconf macros will search for a Fortran
compiler by looking for an f95 compiler before looking for an f77 compiler; if you know or discover this is
inappropriate, then you can constrain the Fortran dialect that AC_PROG_FC
will look for by giving a value for its
optional dialect argument. Macro AC_PROG_FC
is not yet documented in the autoconf manual, but see
Sec. B.
After that, we declare the dependencies. The dependencies you work out by any and all means you can. For a
library, the set of ‘build’ dependencies is determined by the set of components which supply files which the
code here includes. Grepping for all the Fortran INCLUDE
statements and all the C #include
directives is
a good start. For link dependencies, grepping for CALL
lines is useful for Fortran, and grepping
for
will probably be handy. In fact, the script
should give you the raw material for most of the dependencies you need. It doesn’t really matter too much if you get this wrong – you might cause something to be built slightly later than it would otherwise be in the top-level bootstrap, might cause some eventual user to download one more package than they strictly need, or might create a circular dependency and break the nightly build, in which case you’ll find out soon enough.
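A minimal sketch of the sort of grepping meant (assuming GNU grep; mapping the resulting include files and routine prefixes onto component names is still up to you):

    # Inclusions suggest 'build' dependencies
    grep -ih '^ *include' *.f  | sort -u
    grep -h  '^#include'  *.c  | sort -u

    # Called routines suggest 'link' dependencies
    grep -ioh 'call  *[a-z][a-z0-9_]*' *.f | sort -u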
See Sec. A.17 for the description of the various types of dependencies you can specify here.
By the way, remember (because everyone forgets) that there are no components err
and msg
: all of the err_
and
msg_
functions are in the mers
component.
The next couple of lines tell you that we lied outrageously, above, when we were talking about .msg
files.
Though the remarks there are true enough in general, the ADAM_ERR
file is special, and doesn’t come from any
.msg
file. This is surprising enough that it’s worth making a remark to this effect in the configure.ac
file.
Finally, we list the files that should be configured. Essentially all starconf-style configure files should have at least these two files mentioned here.
Now run starconf. As described in Sec. 2.2, this adds some required files, and checks that the directory looks right. It will look something like this:
That complained that the file bootstrap wasn’t present, and then went on to install one for you; it listed a number of files which should be checked in; and it added a bootstrap script. The starconf application actually does the checking by running the starconf-validate script, which you can run yourself independently if you wish.
Now you have a bootstrap file, so run it:
The bootstrap script always re-runs starconf if it can, so this reminds you that you still haven’t
checked those files in. It also runs autoreconf (Sec. 2.1.4) for you, installing the helper files that it
requires, and constructing configure
from configure.ac
and Makefile.in
from configure.ac
and
Makefile.am
.
Now, finally, you can try ./configure
and make
. That might just work.
Iterate until success.
When the code is working, you will probably want to add it to the set of components which are explicitly built. To
do this, add it to the ALL_TARGETS
variable in the top-level Makefile.in
. Next, go to the parent directory of the
directory you have just added: the configure.ac
file there will almost certainly be a skeleton
configure.ac
which includes an AC_CONFIG_SUBDIRS
line, to which you should add your newly working
directory.
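For example, if the new directory were libraries/mypkg (a hypothetical name), the skeleton libraries/configure.ac would gain mypkg in its list of configured subdirectories, along these lines (the other entries stand for whatever is already listed there):

    AC_CONFIG_SUBDIRS(cnf ems chr mypkg)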
The previous section describes the steps required to bring a component into the build system. This section makes reasonably explicit what was implicit in that previous section, by describing the ‘interface’ between the build system and a particular component. The word ‘interface’ is in scare-quotes there because this interface isn’t formal and isn’t enforced, but it should be useful as a check-list and as an overview.
1. component.xml
: every component has to have an XML-valid instance of this, as defined by componentinfo.dtd
with element <component>
at the root level (see Sec. 2.2.4).
2. A Makefile which implements most/all of the GNU standard targets. I think we actually use ‘all’ (default),
‘install’, ‘check’ and ‘dist’ as part of the build system, and I’ve mentioned ‘clean’, ‘distclean’, and
‘maintainer-clean’ elsewhere in this document. The GNU coding standards add ‘uninstall’, ‘install-strip’,
‘mostlyclean’, ‘TAGS’, ‘info’, ‘dvi’, none of which we probably care much about (uninstalling should
probably be handled by whatever package-management tool we settle on, rather than a makefile). The
build system does cd xxx; make; make install
for component cpt
, where xxx
is the contents of
/componentset/component[@id='cpt']/path
in the componentset.xml
file. This requirement comes for free,
given that you use a Makefile.am
file and automake, and this requirement is mentioned only for
completeness.
3. Doing ‘make install’ additionally installs a $prefix/manifests/cpt
manifest, which is a valid instance of the
componentinfo.dtd
DTD with the <manifest>
element at the root level. Again, this step comes for free when
you use the automake system.
4. Bootstrapping: the top-level bootstrap script includes a call to ‘autoreconf’. This configures the whole tree,
using autoconf’s built-in support for recursing, based around the macro AC_CONFIG_SUBDIRS
, and you should
make sure that any new components are pulled in to this mechanism. Apart from that, the behaviour of the
./configure
scripts isn’t really part of this ‘interface’, except that components are linked in to the tree-wide
configure by virtue of being mentioned in the configure.ac
scripts in their parent (see the ‘bridging’ scripts in
applications/configure.ac
and libraries/configure.ac
). If, for some reason, you wished to incorporate a
large number of components in a new tree (perhaps you want to include some complicated tree of Perl modules,
for example), then you would similarly hand-write some ‘bridging’ scripts, which preserve the property that
your new tree is configured (doing whatever is required in a particular situation) when its parent
is.
The Makefile.in
files which automake generates generally handle distribution pretty successfully, and the
command make dist
will usually do almost all the work of packing your files into a distribution which can be
built portably. In some cases, however, you have to give it a little help.
There are two potential problems. Firstly, automake may not be able to accurately work out the set of files which ought to be distributed. Consider the following makefile fragment (this, along with the other examples in this section, is from the AST distribution, which presents a variety of distribution problems):
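The fragment in question is, in condensed and slightly simplified form, along these lines (the sed command is illustrative):

    noinst_PROGRAMS = astbad
    astbad_SOURCES = astbad.c pointset.h

    # ast_par.source appears only as a dependency of this rule
    ast_par: ast_par.source astbad
            sed -e 's/<AST__BAD>/'`./astbad`'/' ast_par.source >$@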
Automake packages anything mentioned in a _SOURCES
variable, so astbad.c
and pointset.h
are included in
the distribution automatically. However it does not attempt to work out every consequence of the makefile
rules, and so fails to spot that ast_par.source
is going to be needed on the build host. In general, a file which is
mentioned only in a makefile dependency will not be automatically distributed by automake. Files
such as this should be included in the distribution by listing them in the value of the EXTRA_DIST
variable:
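    EXTRA_DIST = ast_par.source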
Automake also supports the dist_
and nodist_
prefixes to automake variables. These can be used to adjust
automake’s defaults for certain primaries. Automake does not distribute _SCRIPTS
by default (this is because they
are sometimes generated, but I for one find this counter-intuitive), so if you want a script to be distributed, you
must use a prefix:
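    dist_bin_SCRIPTS = ast_link_adam
    nodist_bin_SCRIPTS = ast_link
    dist_noinst_SCRIPTS = makeh
    nodist_noinst_SCRIPTS = ast_cpp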
We see all four common possibilities here: ast_link_adam
and makeh
are needed in the distribution
but need no configuration, and so should be distributed as they stand; ast_link
and ast_cpp
are configured, so these files should not be distributed, since the corresponding .in
files are (as a
result of being mentioned in AC_CONFIG_FILES
). Also ast_cpp
and makeh
are used only to build the
library, and are not installed. You could get the same effect by listing ast_link_adam
and makeh
in
EXTRA_DIST
, but it is probably a little less opaque if all the information about particular files is kept in one
place.
Incidentally, the other occasionally important ‘prefix-prefix’ like dist
is nobase
. See Sec. 5.2.
On the other hand, files in _SOURCES
variables are distributed by default, so you must turn this off if one of these
files is generated at configure time:
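For example, if one of the library’s source files (here hypothetically called buildinfo.c) were generated by ./configure:

    nodist_libast_la_SOURCES = buildinfo.c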
Though the set of included files is deterministic, I find it is not terribly predictable, and the best way to do this
sort of tidyup is by making a distribution, trying to build it, and thus discovering which files were left out or
included by accident. There is no harm in listing a file in EXTRA_DIST
which would be included
automatically.
For fuller detail on automake’s distribution calculations, see section What Goes in a Distribution of the automake manual.
The second distribution problem is that some Starlink components do quite a lot of work at distribution time, building documentation or generating sources, generally using programs or scripts which are not reasonably available on the eventual build host. This is in principle out of scope for automake and autoconf, but since it is common and fairly standardised in Starlink applications, Starlink automake and autoconf provide some support for pre-distribution configuration.
All configuration tests in configure.ac
should be done unconditionally, even if they are only meaningful prior
to a distribution – they are redundant afterwards, but cause no problems. Any files which should be present
only prior to the distribution should be listed in configure.ac
inside macro STAR_PREDIST_SOURCES. The
./configure
script expects to find either all of these files or none of them, and if it finds some other number, it
will warn you. If it finds these files, it concludes that you are in a pre-distribution checkout, and sets
the substitution variable @PREDIST@
to be empty; if it finds none, it concludes that you are in a
distributed package, and defines @PREDIST@
to be the comment character #
. This means that makefile
fragments which are only usable prior to distribution should all be prefixed with the string @PREDIST@
,
and they will thus be enabled or disabled as appropriate. The distribution rules mentioned above
mean that any configuration of such undistributed files must be done by hand in the Makefile.am
,
and not by AC_CONFIG_FILES
, since this macro automatically distributes the files implied by its
arguments.
For example, the AST configure.ac
has:
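The relevant lines are, in outline, something like these (the program-check macro calls shown here are illustrative):

    dnl  sun_master.tex is used to generate the SUN/210 and SUN/211 sources,
    dnl  and is present only before distribution.
    STAR_PREDIST_SOURCES(sun_master.tex)

    dnl  Generating the documentation needs these applications; the checks are
    dnl  made unconditionally, even though they matter only pre-distribution.
    AC_PATH_PROG(STAR2HTML, star2html)
    AC_PATH_PROG(PROLAT, prolat)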
The sun_master.tex
file is used when the SUN/210 and SUN/211 files are being generated, and should not be
distributed. Since the process of generating the documentation uses application star2html
, we check for this
and for prolat
(and thus do this redundantly even after distribution). Most components can get away with the
one-argument version of STAR_LATEX_DOCUMENTATION
which avoids these complications, and does the
equivalent of the star2html
check internally.
This AST configure script also has
and carefully avoids calling AC_CONFIG_FILES(error.h version.h)
. It still has to configure these files prior to
distribution, so this has to be done in the Makefile.am
:
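A sketch of the sort of stanza involved (the recipe lines must begin with a tab; the real AST Makefile.am differs in detail):

    @PREDIST@error.h: error.h.in config.status
    @PREDIST@       ./config.status --file=error.h:error.h.in

    @PREDIST@version.h: version.h.in config.status
    @PREDIST@       ./config.status --file=version.h:version.h.in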
(this is the same technique that was illustrated in passing in the discussion of ‘installation locations’ in
Sec. 2.1.2). The leading @PREDIST@
strings mean that this stanza causes no problems after distribution, when the
error.h.in
files are not present. See Sec. A.27 for more details.
This is admittedly a rather crude technique, but it is a lot less fragile than the more elegant alternatives.
In general, the details of making distributions are outside the (current) scope of this document. However there is one aspect we can usefully mention, since it interacts with the buildsupport tools which configure the distribution.
Normally, when you ./bootstrap
a directory, it arranges to install it by default in a location governed by
starconf (see Sec. 2.2 for details). Since this default is a location on your local machine, it might not be
appropriate for a distribution tarball. In that case, you will probably want to ensure that the built
distribution will install into /star
(or /usr/local
perhaps). There are multiple ways you can do
this.
If you are doing this for a complete tree, with the intention of building all the software and rolling distribution tarballs with the result, then the procedure you should use is as follows.
1. Decide where you want the distributed software to install by default. Suppose you want this to be /stardev
but the default Starlink directory to remain /star; set these as the STARCONF_* defaults.
If this directory really is just /star then these settings could in principle be omitted and the variables
simply left unset, since /star is the default for both, but declaring them explicitly avoids ambiguity. The
directory mentioned here need not exist or be writable.
2. Decide where the software should actually be installed during the build, taking /tmp/star as an example.
Since this directory will receive the buildsupport tools, and other tools built during the build, it will
need to be included in your PATH.
3. Bootstrap, configure and build the tree, giving /tmp/star as the prefix on the command line rather than
changing the STARCONF_* defaults, so that everything is installed there and not in /star.
This will install the components into /tmp/star
, with later components using tools installed there earlier,
but with the ./configure
scripts within those components written so that they will still install in /star
by
default.
The manifests will end up in /tmp/star/manifests
.
After this process has completed, if you go to a built component and make a tarball using make
dist
, then this tarball will install in /star
by default, which you can check using ./configure
--help
.
If, on the other hand, you wish to create a distribution tarball of a single component, then you do not have to reconfigure the entire tree to do so (fortunately).
The more robust way is to create a special set of buildsupport tools which have the desired directory as their
default prefix. Say you want the distribution to install by default in /star
, but have these special buildsupport
tools installed in /local-star/makedist
(for example). Do that as follows:
(or use env ...
on csh-type shells; you would typically specify STARCONF_DEFAULT_STARLINK
here, too). Setting
the two STARCONF_*
variables is actually redundant, here, since both have /star
as their default, but it does no
harm and can make things usefully explicit.
At this point, you can check out the appropriate version of your software (probably via a CVS export of a
particular tag), put this /local-star/makedist/bin
in your path (rehashing if you are using csh), and call
./bootstrap; ./configure; make dist
as usual. The command ./configure --help
will show you the
default prefix for this ./configure
script.
An alternative way is to use the acinclude.m4
method described in Sec. 2.2.1, setting OVERRIDE_PREFIX
to
/star
(for example). When you run ./bootstrap
after that, the given prefix will be used instead of the prefix
baked into the installed buildsupport tools.
That is, to create a distribution of component foo
, located in libraries/foo
in the repository, you should tag
the source set with a suitable release tag, such as foo-1-2-3
, and then, in a temporary directory, do the
following:
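A sketch of the sort of command sequence involved (the CVS root specification is omitted, and will depend on your local setup):

    % cvs export -r foo-1-2-3 libraries/foo
    % cd libraries/foo
    % ./bootstrap
    % ./configure
    % make dist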
This will leave a tarball such as foo-1.2-3.tar.gz
in the current directory.
Automake provides some simple support for regression tests. There is a (terse) description of these in the
automake manual, in the section Support for test suites, but it lacks any example. You run the tests with make
check
, after the build, but before the component is installed.
You can set up tests as follows.
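A sketch, modelled on the ems example discussed below (the source-file names are illustrative):

    TESTS = test1 test2
    check_PROGRAMS = test1 test2

    test1_SOURCES = test1.f
    test2_SOURCES = test2.c
    test1_LDADD = libemsf.la libems.la `cnf_link`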
The TESTS
variable lists a set of programs which are run in turn. Each should be a program which returns zero
on success, and if all the programs return zero, the test is reported as a success overall. If a non-portable test
makes no sense on a particular platform, the program should return the magic value 77; such a program will not
be counted as a failure (so it’s actually no different from ‘success’, and the difference seems rather pointless to
me). A PROGRAMS
‘primary’ (see Sec. 2.1.2 for this term) indicates that these are programs to be built, but
the ‘prefix’ check
indicates that they need be built only at ‘make check’ time, and are not to be
installed.
The SOURCES
primary is as usual, but while the test2
program is standalone (it’s not clear quite how this will
test anything, but let that pass), the test1
program needs to be linked against two libraries, presumably part of
the build. We specify these with a LDADD
primary, but note that we specify the two libraries which are
actually under test as two libtool libraries, with the extension .la
, rather than using the -lemsf
-lems ‘cnf_link‘
which ‘ems_link‘
uses as its starting point (this example comes from the ems
component). That tells libtool to use the libraries in this directory, rather than any which have been
installed.
The fact that test programs must return non-zero on error is problematic, since Fortran has no
standardised way of controlling the exit code. Many Fortran compilers will let you use the exit
intrinsic:
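          CALL EXIT( STATUS )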
to return a status. Since this is test code, it doesn’t really matter that this might fail on some platforms, but if this worries you, then write the test code as a function which returns a non-zero integer value on error, and wrap it in a dummy C program:
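A sketch of such a wrapper, using the CNF macros from f77.h to declare and call the Fortran function (the name test_fortran is hypothetical):

    #include "f77.h"

    /* INTEGER FUNCTION TEST_FORTRAN() -- returns non-zero on error */
    F77_INTEGER_FUNCTION(test_fortran)( void );

    int main( void )
    {
        /* Use the Fortran function's return value as the exit status */
        return F77_CALL(test_fortran)();
    }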
If the tests you add use components other than those declared or implied as component dependencies (see
Sec. A.17), then you should declare the full set of test dependencies using STAR_DECLARE_DEPENDENCIES([test],
[...])
.
NOTE: the guidance in this section is to some extent subject to change, as we get more experience with including third-party sources in the tree.
If an application needs to rely on a non-Starlink application, and especially if it relies on a modified
version, then the sources for that application should be checked in to the thirdparty/
part of the
tree.
There are two stages to this. Firstly we have to import a distribution version of the software, and secondly we have to bring it in to the Starlink build system.
The example here is the GNU m4 distribution, which is one of the buildsupport tools necessary on Solaris, since that platform has a non-GNU m4 (autoconf relies on language extensions in GNU m4, which is therefore required). The first step is rather mechanical; the second is better introduced by example.
First, we get and import the sources (see the fuller details in the CVS manual):
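The commands are along these lines (the m4 version number shown is illustrative):

    % gunzip -c m4-1.4.tar.gz | tar xf -
    % cd m4-1.4
    % cvs import -ko -m "Initial import of GNU m4 1.4" thirdparty/fsf/m4 FSF m4-1-4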
The -ko
option turns off any keyword expansion for the newly-imported files. Thus they will retain the
values they had when they were imported. This appears to be the practice recommended by the
CVS manual, though it is not absolutely clear that it is best, and this should not be taken as a firm
recommendation.
The thirdparty/fsf/m4
argument is the location of the new component within our repository, the path to
which will be created for you if necessary. FSF – the ‘Free Software Foundation’ – is the ‘vendor’ in this case,
and this is used for the location within the thirdparty/
tree as well as the next CVS argument.
CVS uses this vendor argument as the name of the branch this new import is nominally located
on.
Finally, this import command tags the imported files with the tag you give as the last argument. This should use the same convention as other tags within the Starlink repository, namely the component name and version number.
Note that we import a distribution tarball of the source, namely one including the configure
script and any other
generated files. This is because we cannot reliably bootstrap the original sources if they require, for example, a version of the autotools different from the ones we have installed, and also so
that we can reliably track any future releases of the tarball. Even if the component has a public CVS archive,
resist the temptation to import a snapshot of that.
Now that the source set is in the repository, you can go back to your checkout tree and check the new component out:
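For example (assuming the thirdparty directory is already part of your checkout):

    % cd thirdparty
    % cvs update -d fsf
    % cvs status fsf/m4/configure.in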
We see that the newly-imported files have been put on the 1.1.1 branch, named FSF
. Any files we add to this
component, and any files we modify, will go on the trunk.
What you do next depends on how easy it is to configure the new component. We’ll look at three examples: the
m4
component, containing the GNU version of m4, the cfitsio
component, containing the HEASARC FITS
library, and the tclsys
component, containing a distribution of Tcl.
The m4
configuration is quite regular (as befits a core GNU component), and so admits
of reasonable adaptation in place, by simply editing the configuration files included in the
distribution (and then, despite what we said above, regenerating the distributed ./configure
script).
The cfitsio
distribution comes with its own configure scripts, but they are generated using a
rather old version of autoconf, so we want to avoid touching them. However the build is basically
rather simple, and only a few built files need to be installed. This component shows how to wrap
a distribution’s own configuration in a straightforward way.
The tclsys
component, on the other hand, comes with its own fearsomely intricate configuration
and installation mechanism, which we want to disturb (or indeed know about) as little as possible.
This section shows how to wrap such an installation in the most general way.
The first thing to do is to add the component.xml.in
file:
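(copying the template from the starconf buildsupportdata directory; the template file name is assumed here to be component.xml.in)

    % cp `starconf --show buildsupportdata`/component.xml.in .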
after which we edit component.xml.in
appropriately. This editing turns out to be slightly more intricate than
usual: the configure.in
file is old-fashioned enough that it does not define the substitution variables
that component.xml.in
is expecting, and it requires a slightly closer inspection of configure.in
to determine what these should be (@PRODUCT@
and @VERSION@
in this case). Although this step
seems redundant, it is best to have this file configured rather than completely static, so that the
generated component.xml
will remain correct if and when a new version of the ‘m4’ component is
imported. Having said that, ensure that the <bugreports>
element in the component.xml.in
file
points to a Starlink address – we don’t want bugs in our modifications to be reported to the original
maintainers.
Note that it is the now-deprecated configure.in
file that is the autoconf source in this component, and that it
has now-deprecated syntax; there is no need to update this. When we run autoreconf, we discover that this is
not the only obsolete feature, and we have to do some further mild editing of configure.in
before it is
acceptable to the repository version of autoconf, though it still produces a good number of warnings. These
don’t matter for our present purposes: the ./configure
which autoreconf produces (and which we commit) still
works, and the component builds and runs successfully. If the configure.in
were old enough that our current
autoconf could not process it at all, then we might consider retrieving and temporarily installing an older
version of autoconf – the ./configure
script includes at the top a note of the autoconf version which produced
it.
In order for the new component to be a good citizen in the Starlink build tree, it needs to install a file manifest as
part of the install
target. Add to the configure.in
file the lines
If we were using Starlink automake, this would be enough to prompt it to include support for installing a
manifest in the Makefile.in
it generates. The GNU m4 distribution does not, however, use automake, so we
need to add this by hand. The following additions to Makefile.in
do the right thing, though they are rather
clumsier than the support that Starlink automake adds:
(if you look at thirdparty/fsf/m4/Makefile.in
you will see that we actually need to have PRODUCT
and
VERSION
there instead of the more general PACKAGE_NAME
and PACKAGE_VERSION
above). Note that the actual
installation happens within the rule for install-manifest.xml
– this guarantees that the $(prefix)
stored in
the manifest is the same as the $(prefix)
actually used for the installation.
Since we are creating this manifest entirely by hand, we are responsible for ensuring that the resulting file
conforms to the DTD in componentinfo.dtd
(in the starconf buildsupportdata
directory).
We add the component.xml.in
file to the repository, and look at what we have.
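(that is, roughly)

    % cvs add component.xml.in
    % cvs -nq update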
This tells us that configure
, configure.in
and config.h.in
have been modified, and component.xml.in
added, but we also discover that files stamp-h.in
, doc/stamp-vti
and doc/version.texi
have also been
modified, as part of the regeneration of the ./configure
script. The modification to the stamp file stamp-h.in
we should probably commit, to avoid dependency niggles in the future, but the meaningless changes to the two
doc/
files we can just discard:
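(roughly as follows)

    % rm doc/stamp-vti doc/version.texi
    % cvs update doc
    % cvs commit -m "Regenerate with current autoconf" \
          configure configure.in config.h.in stamp-h.in component.xml.in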
We can see that the changes to configure.in
are now on the trunk for this component, rather than the FSF
branch.
The cfitsio
component wraps the HEASARC FITS library of the same name, and installs from it the library
itself, plus a few header files. Although we probably could adapt the library’s distributed autoconf script, that
script is sufficiently old, and there are sufficiently few components installed, that it is less trouble to simply
wrap this configuration in another one.
The cfitsio
component contains a cfitsio/
directory containing the distributed library source. Along with it
are the usual component.xml.in
file, copied from the template in `starconf --show buildsupportdata`
, and
Makefile.am
and configure.ac
files, which are too different from the template ones for them to be
helpful.
The configure.ac
looks like this:
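In outline it is something like the following; the version number and other details here are illustrative, and the real file (with its explanatory comments) is in the repository:

    AC_INIT([cfitsio], [2.470], [starlink@jiscmail.ac.uk])
    AC_PREREQ(2.50)
    AM_INIT_AUTOMAKE(1.8.2-starlink)
    AC_CONFIG_SRCDIR([cfitsio/fitsio.h])
    STAR_DEFAULTS

    dnl  Configure the cfitsio/ subdirectory by running its own configure
    dnl  script directly, rather than via AC_CONFIG_SUBDIRS.
    (cd cfitsio; ./configure --prefix=$prefix)

    AC_CONFIG_FILES([Makefile component.xml])
    AC_OUTPUT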
Note that we must use STAR_DEFAULTS
, and we must not use AC_CONFIG_SUBDIRS
, for the reason described in
the comments above, even though configuring the subdirectories is exactly what we want to do. This particular
combination of commands is sufficient to configure the cfitsio
distribution – you might need to do
different things for other third-party packages. That’s all the configuration we need; how about the
Makefile?
The Makefile.am
looks like this:
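In outline (the header and example-file lists here are abbreviated, and the real file is rather more heavily commented):

    lib_LIBRARIES = cfitsio/libcfitsio.a
    include_HEADERS = cfitsio/fitsio.h cfitsio/fitsio2.h cfitsio/longnam.h
    starexamples_DATA = $(EXAMPLE_SOURCES)
    EXAMPLE_SOURCES = cfitsio/testprog.c cfitsio/cookbook.c

    # automake insists on a _SOURCES variable for the library, but the sources
    # are built by cfitsio's own makefile, so give it an (empty) value; note
    # the canonicalised variable name.
    cfitsio_libcfitsio_a_SOURCES =

    # Each of the cfitsio/ files is built by that directory's own makefile.
    # The library has to be given as a target by itself (see below).
    cfitsio/libcfitsio.a:
            cd cfitsio && $(MAKE) all

    $(include_HEADERS) $(EXAMPLE_SOURCES):
            cd cfitsio && $(MAKE) all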
(if you look in the cfitsio
component, you will see that the actual Makefile.am
does a little more than this, and
is rather more copiously commented, but these are the important parts).
This uses a number of useful automake features. First of all, the assignments which control what is installed –
the contents of the _LIBRARIES
, _HEADERS
and (via EXAMPLE_SOURCES
) _DATA
primaries – refer to files within the
cfitsio/
subdirectory. When these are installed, the path to them is stripped, so that libcfitsio.a
, for
example, ends up installed in .../lib/
and not in .../lib/cfitsio/
, as you might possibly intuit.
This is the useful behaviour which is turned off by the use of the nobase_
prefix, as described in
Sec. 5.2.
Second, the presence of the lib_LIBRARIES
line means that automake expects to find some library sources
somewhere, and if it is not told where they are with a xxx_SOURCES
variable, then it will assume
a default based on the library name. In this case, of course, the sources are not in this directory,
and we do not wish automake to generate rules to build them, so we have to pacify it by giving a
value for the cfitsio_libcfitsio_a_SOURCES
variable (note the canonicalisation of the library
name).
Each of the cfitsio/
targets is built in the same way, by switching to the cfitsio/
directory and making ‘all’
using that directory’s own Makefile. The only remaining wrinkle is that it turns out we have to specify the
cfitsio/libcfitsio.a
target by itself, rather than as part of the previous compendium target. Due, probably,
to a bug, automake does not appear to ‘notice’ the library if it’s mentioned in the compendium target, and
instead generates its own conflicting target.
The tcl
component (like the parallel tk
component) is paradoxically easy to configure, partly because its
distributed ./configure
script is generated by a version of autoconf too old for us to handle directly, which
means that we want to avoid touching it as much as possible, but also because the Tcl distribution has an
installation mechanism which is too complicated to be handled by the simple scheme in the previous cfitsio
section. The method we use instead, as described below, actually looks simpler, but you should probably not
resort to it unless necessary, since the starconf macro it uses – STAR_SPECIAL_INSTALL_COMMAND
– is a
dangerously blunt instrument.
After being unpacked and imported as described above, the top-level of the checked out Tcl distribution looks like this:
The way that the distributed README
tells us to compile Tcl is to change to the unix/
subdirectory, and type
./configure; make
. We want to create files Makefile.am
and configure.ac
in this directory, which handle this
for us.
We add a component.xml.in
as before, plus non-standard configure.ac
and Makefile.am
files. The
configure.ac
should look like this:
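In outline it is something like this (the version number and other details are illustrative):

    AC_INIT([tcl], [8.4.6], [starlink@jiscmail.ac.uk])
    AC_PREREQ(2.50)
    AM_INIT_AUTOMAKE(1.8.2-starlink)
    AC_CONFIG_SRCDIR([unix/Makefile.in])
    STAR_DEFAULTS

    dnl  Configure the distribution using its own configure script, in unix/.
    (cd unix; ./configure --prefix=$prefix)

    dnl  Defer installation to the distribution's own install target.  The
    dnl  Tcl makefile uses INSTALL_ROOT where newer autoconfs use DESTDIR.
    STAR_SPECIAL_INSTALL_COMMAND([cd unix; $(MAKE) INSTALL_ROOT=$$DESTDIR install])

    AC_CONFIG_FILES([Makefile component.xml])
    AC_OUTPUT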
This looks rather like the cfitsio
example, except for the addition of the new macro, which uses a
parameterised version of the distribution’s own install command to make the installation into the Starlink
tree.
As described in Sec. A.29, this adapts the standard Makefile install
target so that it uses the given command
to make an installation. Note that the version of autoconf which generated the Tcl Makefile.in
template was
one which used the INSTALL_ROOT
variable instead of the DESTDIR
variable used by more modern versions, and
so we have to adjust this in this command line. This DESTDIR
variable is important, as it is used
during the installation of the component, to do a staged installation, from which a manifest can be
derived.
This staging step is important, and is rather easy to get wrong. If you do get it wrong, then the installed
manifest will be wrong, probably causing problems later when it comes to deinstalling or packaging this
component. The role of the INSTALL_ROOT
variable above was discovered only by inspecting the
Makefile.in
in the Tcl distribution: if there were no such facility, we could (probably) fake much of it
by using something like $(MAKE) prefix=$$DESTDIR$$prefix install
in the installation macro. The
distributed Tcl Makefile appears to respect the variable, but it is important to check for errors such as
making absolute links, or using the current directory incautiously (search for LN_S
or ‘pwd‘
in the
Makefile.in
).
The Makefile.am
file is even simpler:
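A sketch of the sort of content involved (the real file may differ in detail):

    ## All the real work is done by the distribution's own makefile in unix/.
    all-local:
            cd unix && $(MAKE) all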
After writing the Makefile.am
and configure.ac
files, all you need to do is run starconf to create a
./bootstrap
file, and check in the files which starconf suggests.
When importing third-party code that includes pre-generated configure scripts, Makefile.in files and so on, which you do not want to regenerate automatically, the imported files may end up with inconsistent timestamps: a Makefile.in may, for example, appear older than its associated Makefile.am, because the initial import was done in the wrong order or too quickly (file systems typically resolve timestamps at one-second intervals). In that case you may need to re-establish the time ordering of some files, but only if the imported build system is sensitive to it. The way to do this is by creating a file
bootstrap_timeorder
that contains a list of relative file names (to the directory containing the bootstrap
script), in the order from oldest to newest. Look for an example of one of these files in the source
tree.
There is fairly complete support for documentation within the build system. Most Starlink documentation is in
the form of LATEX files respecting certain conventions (see SGP/28: Writing Starlink documents and SUN/199:
Star2HTML for some further details), but a small amount is in a custom XML DTD, converted using the kit
contained in the sgmlkit
component, documented in SSN/70.
The simplest case is where you have one or more .tex
files in the component directory (that is, the directory
which contains the component.xml
file). In this case, you simply declare the document numbers, and the files
named after them, in a STAR_LATEX_DOCUMENTATION
macro, as in:
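    STAR_LATEX_DOCUMENTATION(sun123 sc99)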
This will put these document codes into the substitution variable @STAR_DOCUMENTATION@
, which you can use to
substitute in component.xml.in
, and it will arrange to compile the LATEX files sun123.tex
and sc99.tex
. This
macro is documented fully in Sec. A.21.
The final step in this case is to declare that the documentation is to be built and installed in the
Starlink documentation directory. You do that by including the following line in the component’s
Makefile.am
:
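    stardocs_DATA = @STAR_LATEX_DOCUMENTATION@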
(see Sec. A.32). This substitution variable expands to the default list of documentation targets (at present
comprising .tex
, .ps
and .htx_tar
, but subject to change in principle).
The next simplest case is where the source files are in a different directory from the component.xml
.
Indicate this by appending a slash to the end of each of the document codes located elsewhere. For
example:
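    STAR_LATEX_DOCUMENTATION(sun123/ sc99)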
This will behave as above for the sc99.tex
file, but differently for the sun123
document. That code, sun123
is
still included in @STAR_DOCUMENTATION@
, but it is the variable @STAR_LATEX_DOCUMENTATION_SUN123@
which is
set to the default target list, and not @STAR_LATEX_DOCUMENTATION@
. Documentation specified in this way does
not – contrary to appearances – have to live in a similarly named subdirectory of the component
directory; instead you will have to separately make sure that the documentation is built at build
time.
Suppose, for example, you are managing the documentation for the package which contains SUN/123, and
suppose the source for this is in a subdirectory docs/sun-cxxiii
(whimsy is a terrible thing), then you would
still refer to the documentation via STAR_LATEX_DOCUMENTATION(sun123/)
as above, but in the file
docs/sun-cxxiii/Makefile.am
you would include at least the line
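    stardocs_DATA = @STAR_LATEX_DOCUMENTATION_SUN123@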
and make sure that directory docs/sun-cxxiii
is built, and its contents installed when appropriate, by the
usual method of mentioning the directory in a SUBDIRS
declaration in the component Makefile.am
:
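    SUBDIRS = docs/sun-cxxiii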
The datacube
component uses this mechanism to build its documentation.
If you need to add extra files to the .htx
tarball, you can do so via the .extras
file described in
Sec. A.21.
Sometimes you need to do more elaborate things to build your documentation – the ast
component is a good
example, here. In this case, you can give a non-null second argument to the STAR_LATEX_DOCUMENTATION
macro,
giving a list of makefile targets which should be built. The document codes in the first argument are
still included in @STAR_DOCUMENTATION@
, but the second argument is included verbatim in
@STAR_LATEX_DOCUMENTATION@
without any defaulting. You are responsible for adding the given targets to the
Makefile and, as before, you should add @STAR_LATEX_DOCUMENTATION@
to the stardocs_DATA
Makefile
variable.
If you have produced XML documentation, use the STAR_XML_DOCUMENTATION
macro, which is closely
analogous to the LATEX one.
Though most applications and libraries have some documentation associated with them, some documentation is not associated with any particular software, and so lives in a component by itself. These components are principally the various cookbooks and system notes, though there are a few SUNs in this category as well. Though the correct way to handle such components should be fairly clear from the discussion above, it seems worth while to make a few remarks here to clear up some ambiguities.
Like any other component, a documentation-only component must have a bootstrap file, and Makefile.am
,
configure.ac
and component.xml.in
files, as described in Sec. 4.1; however these will typically be rather
simple.
We can look at the SC/3 configuration files for an example.
The configure.ac
file, shorn of most of its comments, looks like this:
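In outline it is something like this (the bug-report address and automake version string are illustrative):

    AC_INIT([sc3], [2], [starlink@jiscmail.ac.uk])
    AC_PREREQ(2.50)
    AM_INIT_AUTOMAKE(1.8.2-starlink)
    AC_CONFIG_SRCDIR([sc3.tex])

    STAR_DEFAULTS(docs-only)
    STAR_LATEX_DOCUMENTATION(sc3)

    AC_CONFIG_FILES([Makefile component.xml])
    AC_OUTPUT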
Note the docs-only
option to the STAR_DEFAULTS
macro (see Sec. A.18 for discussion). This particular
configure.ac
file is to handle document SC/3.2 – this second edition of the document is represented by the
AC_INIT
macro having a ‘version’ number of ‘2’.
The Makefile.am
file has some mild complications:
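In outline, and with hypothetical script and helper names, it contains something like this:

    stardocs_DATA = @STAR_LATEX_DOCUMENTATION@
    starexamples_DATA = sc3_scripts.tar

    # The example scripts documented in the cookbook
    SC3_SCRIPTS = scripts/extract.csh scripts/fitgauss.csh

    sc3_scripts.tar: $(SC3_SCRIPTS)
            tar cf $@ $(SC3_SCRIPTS)

    # sc3.tex includes this generated summary of the scripts...
    sc3-scripts.tex: $(SC3_SCRIPTS)
            ./make-scripts-tex $(SC3_SCRIPTS) >$@

    # ...so the hypertext tarball must depend on it (the build rule for
    # sc3.htx_tar itself is generated automatically)
    sc3.htx_tar: sc3-scripts.tex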
This is the complete SC/3 file. The only thing that is always required in these docs-only Makefile.am
files is the
stardocs_DATA = @STAR_LATEX_DOCUMENTATION@
line, which has exactly the same function described in
Sec. 4.6 above. However cookbooks like SC/3 often have both sets of examples (in this case, example scripts)
and some automatically generated documentation, and this example illustrates how to describe both of these.
The starexamples_DATA
line specifies a file which is to be installed in the examples directory (typically
/star/examples
; see Sec. A.32), and the Makefile.am
therefore provides the rule for generating that tarball of
scripts.
In addition, the sc3.tex
file includes (in the TEX sense) a generated file sc3-scripts.tex
, which documents
these various scripts, and so we include in this Makefile.am
the rule for generating this file from the contents of
the scripts/
directory. We must also indicate that the sc3.htx_tar
file depends on this generated file, and we
do this by including that dependency (without the build rule, which is generated automatically) in this
Makefile.am
file.
Finally, note that this SC/3 directory uses the .htx_tar.extras
mechanism which is described rather in passing
in Sec. A.21.
Whichever type of component you have added, the final step, after committing your changes, is to ensure that
your new component and its newly-declared dependencies are integrated into the network of dependencies
contained within the Makefile.dependencies
file at the top level. To do this, you must go to the top level of a
full checkout of the repository, make sure your new component is checked out there, then delete
Makefile.dependencies
and remake it.
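That is, something like the following, run from the top of the checkout:

    % cvs update -d
    % rm Makefile.dependencies
    % make Makefile.dependencies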
The Makefile.dependencies
file is generated using a Java program, so you must have a JDK in your path
before starting this procedure (you might possibly need to run ./configure --no-recursion
to bring the
Makefile
there up to date with respect to Makefile.in
). Alternatively, you can invoke make with make
JAVA=/path/to/jdk Makefile.dependencies
.
Double-check both Makefile.dependencies
and componentset.xml
, using cvs diff <file>
: ensure that the
right material has been added before re-committing these two files. If this diffing appears to indicate that
material has been removed from either file, this probably means that you don’t have a full or up-to-date
checkout, so investigate that and fix things up before committing.
If the component you have added should be included in the ‘make world’ build, then you should add it to
the list of targets listed in the ALL_TARGETS
variable at the top of the top-level Makefile.in
. You
should do this only if both this component and anything it depends upon build successfully from
scratch.