Re: Some thoughts on XCAP's resource architecture

We're actually on the same page about granularity.  HTTP does not 
*define* a specific granularity, as you said (and as others have 
pointed out, many HTTP implementations are capable of handling a very 
small granularity).  However, since HTTP is one of the most widely 
deployed protocol systems we have, where browsers and intermediaries 
interact with a wide variety of servers and not just one host server, 
the practice matters as well as the definition.  And the extensions 
that have been written to HTTP most definitely assume a larger 
granularity.

I thought of another way to describe the resource granularity problem.  
When we say that something is "An HTTP Resource", here's what we imply 
(particularly for static, authorable resources):
- A resource can be queried for the current Entity Tag if the server
  supports ETags (a small conditional-GET sketch follows this list).
- A resource has its own last-modified timestamp, and supports the
  If-Modified-Since and If-Unmodified-Since conditional headers.
- A resource has a Content-Type and a Content-Length, and may have an
  entity Digest (Content-MD5).
- A resource may be cacheable.
- You can ask an HTTP server what methods may be applied to a resource.
- A resource may be downloadable in byte-ranges.
- Hit-metering is typically measured per resource (see RFC2227 in
  particular).
- A resource has a set of queryable properties, including 'getetag',
  'getlastmodified', 'creationdate' and 'getcontenttype', if the server
  supports WebDAV.
- A resource can be locked (with its own independent lock token and
  lock owner) if the server supports WebDAV level 2.  Each resource has
  its own lock info property.
- A resource can be moved with MOVE or copied with COPY if the server
  supports WebDAV.
- A resource supports the creation of user-defined properties if the
  server supports WebDAV.
- A 'diff' from a previously downloaded copy of a resource can be
  obtained if the server supports RFC3229.
- A resource may have a version history if the server supports RFC3253.
- A resource may have a "working copy" in another location if the
  server supports RFC3253.
- A resource may have a 'comment' property if the server supports
  RFC3253.
- A resource may be checked out and checked in if the server supports
  RFC3253.
- A resource may be given an ordering within a collection if the server
  supports RFC3648.
- A resource has its own access control list if the server supports
  RFC3744 (with any number of principals named in the list).
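
To make a few of these concrete, here is a minimal sketch of a client
exercising the per-resource handles (ETag, conditional GET, WebDAV
properties) using nothing but Python's standard http.client.  The host
name and document path are invented for illustration and aren't meant
to suggest actual XCAP addressing:

    import http.client

    # Hypothetical host and document path, purely for illustration.
    HOST = "xcap.example.com"
    PATH = "/users/joe/buddylist.xml"

    conn = http.client.HTTPConnection(HOST)

    # Plain GET: the per-resource metadata (ETag, Last-Modified,
    # Content-Type, Content-Length) comes back on the response.
    conn.request("GET", PATH)
    resp = conn.getresponse()
    resp.read()
    etag = resp.getheader("ETag")

    # Conditional re-fetch of the same resource: if it hasn't changed,
    # the server answers 304 Not Modified and sends no body.
    if etag is not None:
        conn.request("GET", PATH, headers={"If-None-Match": etag})
        resp = conn.getresponse()
        resp.read()
        print("unchanged" if resp.status == 304 else "changed")

    # On a WebDAV server the same metadata shows up as per-resource
    # properties via PROPFIND (getetag, getlastmodified, getcontenttype).
    propfind = ('<?xml version="1.0" encoding="utf-8"?>'
                '<D:propfind xmlns:D="DAV:"><D:prop>'
                '<D:getetag/><D:getlastmodified/><D:getcontenttype/>'
                '</D:prop></D:propfind>')
    conn.request("PROPFIND", PATH, body=propfind,
                 headers={"Depth": "0", "Content-Type": "application/xml"})
    resp = conn.getresponse()
    print(resp.status, resp.read().decode("utf-8", "replace"))
    conn.close()

The point is simply that every one of these interactions is framed per
resource, so whatever XCAP decides a resource is, all of this machinery
attaches at that granularity.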

Dynamic resources tend not to have all the same characteristics, but 
then they're not authorable.

So when I look at what a resource is across all these HTTP extensions 
and in HTTP itself, and what XCAP wants to do, it seems to me that more 
often than not the XML document is the resource, and the XML node 
should simply be a part of that resource.  We might think it would be 
nice to lock and add access control independently for every XML node, 
but I don't think that would be manageable.  Certainly the per-node 
version history seems prohibitive, while the ability to view past 
versions of the whole document (and revert to a past version) seems 
potentially useful.  That's what I mean by the granularity problem with 
extensions -- the choice of granularity for "what is a resource" has a 
lot of implications.
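
For a rough sense of the scale involved, here's a back-of-the-envelope
sketch.  The buddy-list document below is a made-up toy (the element
names are only illustrative), but it shows how quickly per-node
granularity multiplies the number of things that would each need their
own lock info, ACL and history:

    import xml.etree.ElementTree as ET

    # A toy buddy-list document, invented for illustration only.
    DOC = """\
    <resource-lists>
      <list name="friends">
        <entry uri="sip:joe@example.com"/>
        <entry uri="sip:nancy@example.com"/>
      </list>
      <list name="coworkers">
        <entry uri="sip:bob@example.com"/>
      </list>
    </resource-lists>
    """

    root = ET.fromstring(DOC)
    element_nodes = list(root.iter())  # every element, including the root

    # Document-as-resource: one ETag, one lock token, one ACL, one
    # version history for the whole thing.
    print("resources if the document is the resource: 1")

    # Node-as-resource: each element is an independently addressable
    # resource with its own lock info, ACL, history, and so on.
    print("resources if every node is a resource:", len(element_nodes))

Even this toy document has six element nodes; a realistic list or
authorization policy could easily run to hundreds, each with its own
lock token, ACL and version history under per-node granularity.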

Lisa

On Nov 24, 2004, at 8:18 AM, Alex Rousskov wrote:

>
> On Sun, 2004/11/21 (MST), <lisa@osafoundation.org> wrote:
>
>> the XCAP resource ontology and the URL addressing style that goes 
>> with it shifts the HTTP design along two major axes:
>>
>> 1) Resource granularity
>> 2) Dependency between resources
>
> I disagree that HTTP defines some specific size and number of server 
> resources (what you define as resource granularity). From an HTTP point 
> of view, URL paths are almost opaque. I agree that some server 
> implementations are less suitable than others to support XCAP, but I 
> do not see that as a showstopper. Will handling 1000 1-byte objects be 
> as efficient as handling 1 1000-byte object with HTTP? No, of course 
> not. However, handling 1000 1-byte objects may be efficient enough for 
> a given application/environment. And if some proxy breaks while 
> handling a large number of small objects, that proxy is not HTTP 
> compliant and should be fixed (to prevent DoS attacks and such).
>
> I agree that HTTP assumes that resources are mostly independent. There 
> are no HTTP mechanisms to, say, invalidate a large group of resources 
> with a single response. However, individual applications and 
> environments can deal with it. For example, Apache provides 
> per-directory access controls. ICAP has an ISTag header to invalidate all 
> cached responses from a given ICAP server at once. These examples do 
> not use HTTP features, but work fine on top of HTTP. Again, some 
> existing server implementations would be less appropriate for 
> supporting XCAP, but that should not be a showstopper for XCAP.
>
> $0.02,
>
> Alex.
>
