BATCH operation [was Re: Comments on draft-ietf-deltav-versioning-08]

    Message-ID: <39CF8434.C80529BC@verticalsky.com>
    Date: Mon, 25 Sep 2000 12:58:28 -0400
    From: Ross Wetmore <rwetmore@verticalsky.com>
    To: ietf-dav-versioning@w3.org
    Subject: BATCH operation [was Re: Comments on draft-ietf-deltav-versioning-08]
    
    
    I do not believe a BATCHed atomic operation is any more limited to
    sophisticated database systems than the current set of operations
    is; the extreme case, at least, was not the connotation I intended.
    I would not like to see this used as a cop-out to avoid the issue
    of how compound operations are handled on the server rather than
    from the client.
    
    The point I wished to make is that the WebDAV versioning extensions
    have broken execution down into what is perceived to be an
    elementary set of primitive operations. But most versioning systems
    implement user operations that combine several primitives. To
    handle the typical situation in any reasonably sophisticated
    versioning system, or even one that is not quite aligned with the
    underlying model being developed here, some mechanism is needed to
    move the logic for conditionally executing a set of operations to
    the server as a single network transaction. Leaving the onus of
    dealing with these issues solely to the client, to be run over a
    slow and potentially flaky network, may severely impact the utility
    and capability of the WebDAV extensions.
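
    For concreteness, here is what a typical compound operation (check
    out, modify, check in) looks like today from the client side -- a
    minimal sketch in Python; the server name, resource path, and error
    handling are made up, but CHECKOUT, PUT, CHECKIN and UNCHECKOUT are
    the relevant versioning methods:

        # Client-driven compound operation: every primitive is a
        # separate network turnaround, and the client must undo any
        # partial state itself when a later step fails.
        import http.client

        conn = http.client.HTTPConnection("example.com")

        def do(method, path, body=None):
            conn.request(method, path, body)
            resp = conn.getresponse()
            resp.read()            # drain so the connection is reusable
            return resp.status

        if do("CHECKOUT", "/src/foo.c") >= 400:
            raise RuntimeError("CHECKOUT failed")

        if do("PUT", "/src/foo.c", b"new contents") >= 400:
            # Roll back ourselves: yet another turnaround, and a window
            # in which third parties can observe the partial state.
            do("UNCHECKOUT", "/src/foo.c")
            raise RuntimeError("PUT failed; checkout rolled back")

        if do("CHECKIN", "/src/foo.c") >= 400:
            do("UNCHECKOUT", "/src/foo.c")
            raise RuntimeError("CHECKIN failed; checkout rolled back")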
    
    If I used the term "server BATCH scripting capability" to describe
    the concept, with only a couple of simple server operations defined
    as a starting point rather than a full-blown multi-threaded atomic
    execution engine, would this help move the discussion forward?
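
    On the wire, such a request might look something like the sketch
    below. The BATCH method and the XML vocabulary in the body are
    invented here purely for illustration; nothing like them exists in
    the current draft:

        # Hypothetical single-turnaround form of the same compound
        # operation. "BATCH" and the XML elements below are invented
        # for this sketch, not taken from draft-ietf-deltav-versioning.
        import http.client

        body = b"""<?xml version="1.0" encoding="utf-8"?>
        <D:batch xmlns:D="DAV:">
          <D:checkout><D:href>/src/foo.c</D:href></D:checkout>
          <D:put><D:href>/src/foo.c</D:href>
                 <D:content>new contents</D:content></D:put>
          <D:checkin><D:href>/src/foo.c</D:href></D:checkin>
        </D:batch>"""

        conn = http.client.HTTPConnection("example.com")
        conn.request("BATCH", "/src/", body,
                     headers={"Content-Type": "text/xml; charset=utf-8"})
        resp = conn.getresponse()
        # One round trip: the server applies all three operations, or
        # rolls back and reports which step failed.
        print(resp.status, resp.read())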
    
    In its simplest form this might be no more than the ability to nest
    successive operations. An error return from any nested operation
    would be treated as an error in the post-condition of the enclosing
    operation, and would trigger whatever rollback was appropriate for
    that operation.
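
    A rough sketch of that execution rule on the server side (the
    Operation class and its methods are invented for illustration):

        # Nesting rule: a failure anywhere in the nested operations is
        # treated as a failed post-condition of the enclosing operation,
        # which rolls itself back and propagates the failure outward.
        class OperationError(Exception):
            pass

        class Operation:
            def __init__(self, name, nested=()):
                self.name = name
                self.nested = list(nested)

            def apply(self):
                ...    # perform the primitive (CHECKOUT, PUT, ...)

            def rollback(self):
                ...    # undo whatever apply() did

        def execute(op):
            op.apply()
            try:
                for child in op.nested:
                    execute(child)    # recurse into nested operations
            except OperationError:
                op.rollback()    # failed post-condition: undo this op
                raise            # let enclosing operations roll back too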
    
    Cheers,
    RossW
    
    "Geoffrey M. Clemm" wrote:
    > 
    >    From: Ross Wetmore <rwetmore@verticalsky.com>
    > 
    >    "Geoffrey M. Clemm" wrote:
    >    > You should be able to get much of the optimization you need from
    >    > the HTTP-1.1 ability to keep a connection alive.
    > 
    >    If conditional execution or rollback is required, then one is limited
    >    to complete network turnaround between operations, plus the added burden
    >    of additional checks to make sure that overlapping operations have not
    >    modified the underlying state, or for 3rd parties, that they have not
    >    picked up partial state for a composite operation. This is significant
    >    overhead vs a BATCHed atomic operation and is not helped by keep-alive.
    > 
    > Yes, I agree (the live connection only saves you the cost of re-connecting).
    > Unfortunately, requiring atomic behavior for compound operations (especially
    > for client defined compound operations) pretty much limits the possible
    > implementations to database systems, and for scalable multi-user access,
    > a fairly sophisticated database system.
    > 
    >    Is there the intention to have such issues be addressed in a
    >    broader context or a supplement to this supplement? Or is it the
    >    collected wisdom that these are not (at least immediately, or
    >    without further experience) issues for concern?
    > 
    > They are issues for concern, but no interoperable solution that is
    > implementable on a wide variety of repositories has been proposed.
    [ ...]
    > 
    > Cheers,
    > Geoff