Re: Build management with Delta-V



Message-ID: <015a01bee441$b4bc7790$0d1b15ac@aftershock.atria.com>
From: "Geoffrey Clemm" <geoffrey.clemm@Rational.Com>
To: <ietf-dav-versioning@w3.org>
Date: Wed, 11 Aug 1999 17:37:26 -0400
Subject: Re: Build management with Delta-V

Build management (and I agree that it is better called
"derived resource management") is certainly a topic near and dear
to both my company (see: ClearMake) and me personally (see: Odin).
I believe, though, that the issues of derived resource management can be
dealt with largely orthogonally to versioned resource management
(assuming one is suitably careful in how one defines versioning
support :-).

One preliminary way of being "careful" is to reserve the term
"derived resource" for those resources that are mechanically
computed from other resources (commonly called "source
resources").

So although I will eagerly volunteer to participate in any work
in this area, I would not advocate that this work take place as
part of the versioning work.

Some comments on the current thread below...


From: Sean Shapira <sds@jazzie.com>


>> Jim Whitehead wrote:
>> > 2) source code remote, compiler remote, object files remote
>> >
>> > In this model, you would submit the URL of a makefile, some
>> > parameters, and a workspace to a remote build server, and the build
>> > server would then go off and remotely compile the source code.
>
>To which Ken Coar responded:
>> In ye olden daze (before the onset of ye greye haires), this was
>> called 'batch processing.'  [...]  been there, done that, dropped
>> my obligatory box o' cards.
>
>Normally I would agree with this sentiment, as it facilitates a
>less restrictive (and thus more efficient) development environment.
>But I fear it doesn't always work, and Delta-V needs to allow
>support for the cases where it doesn't.

I'd go even further.  The Web is *all about* remote processing.
The difference is that it is not "batch" processing, but interactive,
fine-grained remote processing.  So I believe that case 2 is
not only a necessary case, but the primary one.
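
For concreteness, submitting a remote build in case 2 might look
something like the rough Python sketch below.  Everything in it is
hypothetical: the BUILD method, the XML body elements, and the server
name are inventions for illustration, not anything in the current
drafts.

import http.client

body = """<?xml version="1.0" encoding="utf-8"?>
<B:build xmlns:B="http://example.com/build-ns">
  <B:makefile>http://example.com/src/Makefile</B:makefile>
  <B:parameter>CC=gcc</B:parameter>
  <B:workspace>http://example.com/ws/geoff</B:workspace>
</B:build>"""

# Submit the makefile URL, parameters, and workspace to the build server.
conn = http.client.HTTPConnection("build.example.com")
conn.request("BUILD", "/builder", body=body,
             headers={"Content-Type": "text/xml; charset=utf-8"})
resp = conn.getresponse()
print(resp.status, resp.reason)   # perhaps 202 Accepted while the build runs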

It is certainly true that some derivation processes are fairly
expensive (such as compiling), so you'd like to be able to apply
local processing power when it is available.  You'd also like to be
able to perform these derivations even when you are disconnected
from the Web.
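
A client could get both by trying the remote build server first and
falling back to a local derivation when disconnected.  A minimal
sketch, using the same hypothetical BUILD method and server as above:

import http.client
import subprocess

def derive() -> None:
    try:
        conn = http.client.HTTPConnection("build.example.com", timeout=5)
        conn.request("BUILD", "/builder")
        print("remote build:", conn.getresponse().status)
    except OSError:
        # Disconnected (or the server is unreachable): derive locally.
        subprocess.run(["make", "all"], check=True)

derive()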


>> I like the CVS model, which would have me replicate the source
>> to my workspace, perform the build, and then check in any
>> updated results as appropriate.
>
>To my eye Ken has elided an important step:  build verification
>testing.  In a BVT the user at a minimum makes sure the derived
>object is not totally broken (e.g.:  an executable that the loader
>will fail to load).  Ideally the BVT would include a complete
>regression test suite, run on all supported target platforms.

Yes, this kind of functionality is unlikely to be easily replicated on
every client (especially the cross-platform test runs :-).
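
To make the cycle concrete, here is a rough Python sketch of Ken's
replicate/build/check-in loop with Sean's verification step added.
The host, resource paths, and make targets are all invented; CHECKOUT
and CHECKIN are the method names under discussion for the versioning
protocol.

import http.client
import subprocess
from pathlib import Path

HOST = "dav.example.com"   # assumed server; all paths below are invented

def dav(method, path, body=None):
    conn = http.client.HTTPConnection(HOST)
    conn.request(method, path, body=body)
    resp = conn.getresponse()
    data = resp.read()
    conn.close()
    return resp.status, data

# 1. Replicate the source into the local workspace.
_, src = dav("GET", "/src/hello.c")
Path("hello.c").write_bytes(src)

# 2. Perform the build, then at least a minimal verification (BVT):
#    confirm the derived object actually loads and runs.
subprocess.run(["make", "hello"], check=True)
subprocess.run(["./hello"], check=True)

# 3. Check in the updated result as appropriate.
dav("CHECKOUT", "/obj/hello")
dav("PUT", "/obj/hello", body=Path("hello").read_bytes())
dav("CHECKIN", "/obj/hello")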

>It seems likely the representatives of sophisticated versioning
>systems (Atria was mentioned in a prior message) know about
>these needs, and have voiced them in design team discussions.
>True?

Yes, one of the most "challenging challenges" of the protocol design
was to provide a single protocol that supports lightweight clients
(where most/all processing occurs on the server) and heavyweight
clients (where significant processing occurs on the client).  In particular,
the current "workspace" resource type has been carefully designed to
be suitable for both situations.
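
As a sketch of how such a workspace might be created (assuming a
MKWORKSPACE method along the lines we have been discussing; the host
and workspace path are invented):

import http.client

conn = http.client.HTTPConnection("dav.example.com")   # assumed server
conn.request("MKWORKSPACE", "/ws/geoff")               # assumed workspace path
print(conn.getresponse().status)                       # expect 201 Created

A lightweight client would stop there and let the server perform every
derivation inside /ws/geoff; a heavyweight client would additionally
replicate the workspace contents locally, as in the earlier sketch.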

Cheers,
Geoff