Re: WebDAV and Dynamic Content

Brett McCormick wrote:
> I guess I haven't heard any reasons why the current scheme wouldn't
> satisfy your requirements in a fully interoperable way.

I've only heard that you need different URIs to edit source.  If there
is a way to do what I was talking about, then that's great.

> I feel strongly that the webdav spec should not provide a facility for
> obtaining the source content using the actual uri of the output
> resource.  I feel that this extra mechanism is non-interoperable.  The
> DAV server may not even be able to fulfill the request for the
> unprocessed source, and the client must then fall back to the existing
> mechanism.

If the DAV server cannot fulfill the request, then it wouldn't qualify
as "WebDAV class XXX compliant."  If someone is running an older WebDAV
implementation, they could still support editing source the way you
described.  I don't think that should restrict us from defining new
functionality in a new version of WebDAV.

> It isn't functionality that is extensible enough for our needs
> (assumes one source document) nor can it be uniformly provided across
> DAV server implementations.

I think we should worry about how to grow the spec to handle the needs
of users and admins, and then separately worry about how to implement
the spec as a vendor (either open source or proprietary).  HTTP/1.0 had
very limited caching capabilities, and even though most proxy servers
would have to be rewritten to support more advanced caching, the IETF
still defined more advanced caching in HTTP/1.1.

But rather than belabor my argument ad nauseam, I'd like to suggest two
alternatives (which I believe were already raised):

1. GETSOURCE request method
This approach uses GETSOURCE to retrieve the source for a URI.  As a
different HTTP request method, I would think even older proxies might
just pass it through.  It doesn't address how you would ensure a LOCK
or a PUT (or anything else) isn't operating on the processed content;
you would need some way in those other methods to indicate that the
client knows it is working with source rather than processed content.
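
For concreteness, a rough sketch of what a client issuing GETSOURCE
might look like (Python, with a made-up host and path; GETSOURCE is of
course still only a proposal).  Most HTTP client libraries will pass an
unrecognized method name straight through, and the client can fall back
if the server rejects it:

    import http.client

    conn = http.client.HTTPConnection("dav.example.com")
    conn.request("GETSOURCE", "/reports/summary.asp")
    resp = conn.getresponse()
    if resp.status == 200:
        source = resp.read()    # unprocessed source of the resource
    elif resp.status in (405, 501):
        source = None           # method not understood; fall back to
                                # the separate source-URI mechanism
    conn.close()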

2. New header in the request (and response)
This approach uses an additional header on the request to indicate that
the client wants to work with the "source version" of the resource.
The server could return this header (or a directly related one) to
indicate that the client is receiving/working with the source version
of the URI.  (The lack of this header would indicate it is a processed
version, or that the server does not support editing source this way.)
This header could be passed with all the necessary requests, ensuring
the source is not overwritten with processed content.  Apparently there
are problems with proxies when relying on headers, but I think that if
the client expected an appropriate header in the response, it could
detect that a proxy was returning the processed version instead of the
source version and deal with it accordingly.  This (I believe) is
similar to how keep-alive connections work: the server has to return an
appropriate header, so both client and server know the other is
compliant with this new functionality.
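
Again just to make it concrete, a sketch of the header-based
alternative (the header name "Source-Version" is purely hypothetical,
as is the host/path; the important part is that the client only trusts
the body as source when the server echoes the header back, which
catches a proxy silently serving the processed output):

    import http.client

    conn = http.client.HTTPConnection("dav.example.com")
    conn.request("GET", "/reports/summary.asp",
                 headers={"Source-Version": "true"})
    resp = conn.getresponse()
    body = resp.read()
    if resp.getheader("Source-Version") is not None:
        source = body           # server confirmed this is the source
    else:
        source = None           # processed output, or the server or
                                # an intermediary does not support it
    conn.close()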

Other alternatives might be to add parameters to the XML that is passed
back and forth, but I think most of the difficulty surrounds the actual
content retrieval request, which (I don't think) involves XML metadata.

Serge Knystautas
Loki Technologies
http://www.lokitech.com

Received on Sunday, 21 November 1999 21:48:16 UTC