
Re: Some thoughts on XCAP's resource architecture

From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Wed, 24 Nov 2004 13:29:54 -0700
To: "Lisa Dusseault" <lisa@osafoundation.org>
Cc: "HTTP working group" <ietf-http-wg@w3.org>, "'simple@ietf.org'" <simple@ietf.org>
Message-ID: <opshzkj4xxiz3etf0c9082f7@pail.measurement-factory.com>

On Wed, 2004/11/24 (MST), <lisa@osafoundation.org> wrote:

> And the extensions that
> have been written to HTTP most definitely assume a larger granularity.

I am not sure that is true; let's go through your (edited) list:

> A resource can be queried for the current Entity Tag if ...
> A resource has its own last-modified timestamp.
> A resource has a Content-Type and a Content-Length.
> A resource may have an entity Digest (Content-MD5)
> A resource may be cacheable.
> You can ask an HTTP server what methods may be applied to a resource.
> A resource may be downloadable in byte-ranges.

All of the above can still "work" when the resource is the result of a query.
Some servers will support some of the above features for XCAP, and some will
not.
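To illustrate the point, here is a minimal sketch (in modern Python, with made-up document content and a hypothetical `make_headers` helper) of a server deriving an ETag, Content-Length, and Content-Type for an entity that is generated on the fly from a node query rather than read from a file:

```python
import hashlib
import xml.etree.ElementTree as ET

# A hypothetical stored XCAP document.
DOC = "<users><user id='1'>Alice</user><user id='2'>Bob</user></users>"

def get_node(doc_xml, xpath):
    """Evaluate a node query against the document and serialize the result."""
    node = ET.fromstring(doc_xml).find(xpath)
    return ET.tostring(node)

def make_headers(entity):
    """Derive standard HTTP entity headers from the generated body.

    The ETag is a hash of the serialized result, so it changes exactly
    when the query result changes -- just as for a static resource.
    """
    return {
        "ETag": '"%s"' % hashlib.sha1(entity).hexdigest(),
        "Content-Length": str(len(entity)),
        "Content-Type": "application/xml",
    }

body = get_node(DOC, ".//user[@id='2']")
headers = make_headers(body)
```

Nothing here requires the resource to be a whole file; the headers are a function of whatever entity the query produces.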

> Hit-metering is typically measured per resource

Hit metering (as specified in the IETF) is dead, but it would still "work" for
XCAP, with post-processing scripts merging individual hit counters, for
example; and nothing prevents folks from specifying a "more XCAP-friendly" hit
metering extension.
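The merging step is trivial. A sketch, assuming per-node counters keyed by XCAP-style URIs (the "~~" separator between document and node selector, and the sample URIs and counts, are illustrative only):

```python
from collections import Counter

# Hypothetical per-node hit counters, one per XCAP node resource.
node_hits = {
    "resource-lists/users/joe/doc.xml/~~/resource-lists/list%5b1%5d": 12,
    "resource-lists/users/joe/doc.xml/~~/resource-lists/list%5b2%5d": 30,
}

def document_hits(node_hits):
    """Merge node-level hit counters into per-document totals."""
    totals = Counter()
    for uri, hits in node_hits.items():
        # Everything before the node-selector separator names the document.
        doc = uri.split("/~~/")[0]
        totals[doc] += hits
    return totals
```

A nightly post-processing script running this over the access log would give the per-document numbers that existing hit-metering tools expect.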

> A resource .... if the server supports WebDAV.

I agree that editing some XCAP resources via WebDAV would be difficult, but I
think it would still be technically possible. Just do not think of an XML
document as a single file. Can you name a single WebDAV feature that would be
impossible to implement for most XCAP resources?

> A 'diff' from a previously downloaded copy of a resource can be obtained  
> if ...
> A resource may have a version history if ...
> A resource may have a "working copy" in another location if ...
> A resource may have a 'comment' property if ...
> A resource may be checked out and checked in if ...
> A resource has its own access control list if ...

Again, all of these seem technically possible with XCAP resources, and not
that difficult if the XML storage model is more XML-friendly than a regular
"file on disk".

> A resource may be given an ordering within a collection if the server  
> supports RFC3648.

Not sure about this one, but I think that some DTDs allow for any ordering  
of XML nodes inside an XML element, so perhaps the above is also  
applicable, in some environments.

> So when I look at what a resource is across all these HTTP extensions  
> and in HTTP itself, and what XCAP wants to do, it seems to me that more  
> often than not the XML document is the resource, and the XML node should  
> simply be a part of that resource.

That view is one of the many valid views, IMO.

> We might think it would be nice to lock and add access control  
> independently for every XML node but I don't think that will be  
> manageable.

Why not? If an XML document is a collection of individually managed nodes, I
do not see a problem. As a simple example, imagine a CGI script that assembles
an XML document from 100 nodes, each stored in its own file. As a more
flexible example, imagine an XML-friendly database that allows users to attach
user-defined attributes to every XML query result (which is nothing but a
"view" in database terminology).
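The CGI-script example above can be sketched in a few lines (the directory layout, root tag, and `assemble` function are assumptions for illustration):

```python
import os
import xml.etree.ElementTree as ET

def assemble(node_dir, root_tag="clients"):
    """Build one XML document from individually stored node files,
    the way a CGI script or server module might on each GET.

    Each file in node_dir holds one node; the document as a whole
    never exists on disk, yet clients can still GET it (or any one
    node) as an ordinary HTTP resource.
    """
    root = ET.Element(root_tag)
    for name in sorted(os.listdir(node_dir)):
        with open(os.path.join(node_dir, name), "rb") as f:
            root.append(ET.fromstring(f.read()))
    return ET.tostring(root)
```

Locks, ACLs, or version histories then attach naturally to the per-node files, while the assembled document remains available as a read-only view.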

> Certainly the per-node version history seems prohibitive

Why? If my XML document is a collection of client-information nodes, I can
write an interface that shows a per-client history of changes.
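Per-node history need not be any heavier than per-file history. A minimal sketch of the idea, with a hypothetical `NodeHistory` class keyed by node selector:

```python
from datetime import datetime, timezone

class NodeHistory:
    """Keep an independent version list for each node of a document."""

    def __init__(self):
        # node selector -> list of (timestamp, serialized content)
        self.versions = {}

    def record(self, selector, content):
        """Append a new version of one node; other nodes are untouched."""
        self.versions.setdefault(selector, []).append(
            (datetime.now(timezone.utc), content))

    def history(self, selector):
        """Return all recorded versions of a single node."""
        return self.versions.get(selector, [])
```

Each node carries its own change log, so a per-client history view is just `history(selector)` for that client's node; whole-document versioning can sit alongside it independently.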

> while the ability to view past versions of the whole document (and  
> revert to a past version) seems potentially useful.

One does not preclude the other, I think.

> That's what I mean by the granularity problem with extensions -- the  
> choice of granularity for "what is a resource" has a lot of implications.

It seems to me that you have an "XML document is a file" assumption that  
is not true in general.

Also, I am not sure we should attack one HTTP application based on what other
HTTP applications or extensions can or cannot do. Each extension has its
niche, and they do not have to overlap perfectly all the time (otherwise, they
would be included in the core protocol!).


Received on Wednesday, 24 November 2004 20:30:03 UTC
