Build management with Delta-V

Jim Whitehead (ejw@ics.uci.edu)
Wed, 28 Jul 1999 19:49:29 -0700


From: Jim Whitehead <ejw@ics.uci.edu>
To: ietf-dav-versioning@w3.org
Date: Wed, 28 Jul 1999 19:49:29 -0700
Message-ID: <NDBBIKLAGLCOPGKGADOJAEENCCAA.ejw@ics.uci.edu>
Subject: Build management with Delta-V


I'd like to throw this question out to the list and see what people think:

When using Delta-V for remote management of source code, how do you imagine
performing build management?

In my view, there are several choices, based on where the compiler, source
code, and object files are located. For each of these three, the location is
either local, on a local hard drive, or remote, on a remote Delta-V server
(or, really far out, perhaps on a proxy...)

Some points in this space:

1) source code local, compiler local, object files local

This is very CVS-like.  Before the compile occurs, the source code is
replicated to the local hard disk, and the local compiler works on this
code, producing object files which are also stored locally.  The object
files are not under SCM control.
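
To make that concrete, here's a rough sketch in Python of how a client
might do it: use PROPFIND/GET to pull the sources down to local disk, then
run make locally.  The server URL, the paths, and the use of the 'requests'
library are all just illustrative, and I've left out workspace/checkout
details entirely:

# Sketch of option 1: pull the sources down to a local working area,
# then run an ordinary local build.  Server URL and paths are invented.
import os
import subprocess
import xml.etree.ElementTree as ET
import requests

SERVER = "http://dav.example.com"          # hypothetical Delta-V server
SRC_COLLECTION = "/projects/foo/src/"      # hypothetical source collection
WORKDIR = "/tmp/foo-build"

def fetch_sources():
    # Enumerate everything under the source collection.
    resp = requests.request("PROPFIND", SERVER + SRC_COLLECTION,
                            headers={"Depth": "infinity"})
    resp.raise_for_status()
    tree = ET.fromstring(resp.content)
    ns = "{DAV:}"
    for response in tree.findall(ns + "response"):
        # Assumes the server returns hrefs as absolute paths, as most do.
        href = response.find(ns + "href").text
        if href.endswith("/"):
            continue                       # skip the collections themselves
        local = os.path.join(WORKDIR, href[len(SRC_COLLECTION):].lstrip("/"))
        os.makedirs(os.path.dirname(local), exist_ok=True)
        body = requests.get(SERVER + href)
        body.raise_for_status()
        with open(local, "wb") as f:
            f.write(body.content)

def build():
    # Object files land in WORKDIR and stay outside SCM control.
    subprocess.run(["make"], cwd=WORKDIR, check=True)

if __name__ == "__main__":
    fetch_sources()
    build()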

2) source code remote, compiler remote, object files remote

In this model, you would submit the URL of a makefile, some parameters, and
a workspace to a remote build server, and the build server would then go off
and remotely compile the source code.  While the compiling machine probably
wouldn't be the same machine as the DeltaV server, I can see it being on
the same storage area network as the DeltaV server's repository. In this
case, the object files would either be stored uncontrolled in a separate URL
hierarchy from the source code, or under CM control in the same URL
hierarchy as the source code (want to see your object files -- just use your
workspace).
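
Just to give a feel for the shape of that interaction, here's a sketch of
what submitting a build might look like.  To be clear, nothing like this is
defined in DeltaV today; the build server, its URL, and the request format
are all made up:

# Sketch of option 2: hand the whole job to a remote build server.
# The /builds URL and the request body are invented to show the shape
# of the interaction, not a real or proposed protocol.
import requests

BUILD_SERVER = "http://build.example.com/builds"   # hypothetical

build_request = {
    "makefile":  "http://dav.example.com/projects/foo/src/Makefile",
    "workspace": "http://dav.example.com/workspaces/jim-main/",
    "target":    "all",
    "parameters": {"CC": "gcc", "OPT": "-O2"},
}

# The build server would resolve the sources through the named workspace,
# compile them near the repository (e.g. over the same SAN), and either
# leave the object files in an uncontrolled URL hierarchy or check them
# in alongside the sources.
resp = requests.post(BUILD_SERVER, json=build_request)
resp.raise_for_status()
print("build submitted, status URL:", resp.headers.get("Location"))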

3) source code remote, compiler local, object files remote

In this model, the compiler works locally on Web resources that are
pre-fetched from the Delta-V server right before the compile.  After the
compile is done, the object files are put back up on a remote DeltaV server.
In this case, the object files don't have to be on the same server as the
source code -- it would be possible to store the object files on a different
server, and have links to them from the source code.  This might be an
advantage for handling object code for multiple platform variants.  Follow
link A for the Solaris object code, link B for Linux...
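
Here's a sketch of that round trip, again with made-up server names and
paths, and skipping the question of whether the object files get checked
in (CHECKOUT/PUT/CHECKIN) or just PUT uncontrolled:

# Sketch of option 3: compile locally against pre-fetched sources, then
# push the resulting object files back up, possibly to a different server
# than the one holding the sources.
import glob
import os
import subprocess
import requests

OBJ_SERVER = "http://objects.example.com"          # need not hold the sources
OBJ_COLLECTION = "/objects/foo/solaris/"           # one collection per platform
WORKDIR = "/tmp/foo-build"

# Assumes the sources were already fetched into WORKDIR (as in option 1).
subprocess.run(["make"], cwd=WORKDIR, check=True)

for obj in glob.glob(os.path.join(WORKDIR, "*.o")):
    dest = OBJ_SERVER + OBJ_COLLECTION + os.path.basename(obj)
    with open(obj, "rb") as f:
        requests.put(dest, data=f).raise_for_status()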

I can also envision projects having multiple opportunistic compile servers
that immediately start compiling as soon as a resource is modified.  The
compile server would either poll periodically, or subscribe to a
(non-DAV-transported) event stream of changes from the DeltaV server, and
would fire off a compile as soon as an object changed.
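
The polling flavor of that might look something like this; the event-stream
flavor would replace the poll with a subscription, which is outside DAV
proper.  Again, the URLs are invented, and a Depth 0 poll on the collection
is a simplification -- a real watcher might need per-resource checks:

# Sketch of an opportunistic compile server, polling variant: watch the
# getlastmodified property of a source collection and kick off a build
# whenever it changes.
import time
import xml.etree.ElementTree as ET
import requests

SOURCE = "http://dav.example.com/projects/foo/src/"
POLL_SECONDS = 60

PROPFIND_BODY = """<?xml version="1.0"?>
<D:propfind xmlns:D="DAV:">
  <D:prop><D:getlastmodified/></D:prop>
</D:propfind>"""

def last_modified():
    resp = requests.request("PROPFIND", SOURCE, data=PROPFIND_BODY,
                            headers={"Depth": "0",
                                     "Content-Type": "text/xml"})
    resp.raise_for_status()
    tree = ET.fromstring(resp.content)
    return tree.find(".//{DAV:}getlastmodified").text

def compile_now():
    print("source changed, firing off a compile...")   # build logic elsewhere

seen = last_modified()
while True:
    time.sleep(POLL_SECONDS)
    current = last_modified()
    if current != seen:
        seen = current
        compile_now()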

So, what are your thoughts?

- Jim