- From: Alex Rousskov <rousskov@measurement-factory.com>
- Date: Wed, 24 Nov 2004 09:18:48 -0700
- To: "Lisa Dusseault" <lisa@osafoundation.org>, "HTTP working group" <ietf-http-wg@w3.org>, "'simple@ietf.org'" <simple@ietf.org>
On Sun, 2004/11/21 (MST), <lisa@osafoundation.org> wrote:

> the XCAP resource ontology and the URL addressing style that goes with
> it shifts the HTTP design along two major axes:
>
> 1) Resource granularity
> 2) Dependency between resources

I disagree that HTTP defines some specific size and number of server resources (what you define as resource granularity). From the HTTP point of view, URL paths are almost opaque. I agree that some server implementations are less suitable than others for supporting XCAP, but I do not see that as a showstopper.

Will handling 1000 1-byte objects be as efficient as handling one 1000-byte object with HTTP? No, of course not. However, handling 1000 1-byte objects may be efficient enough for a given application or environment. And if some proxy breaks while handling a large number of small objects, that proxy is not HTTP compliant and should be fixed (to prevent DoS attacks and such).

I agree that HTTP assumes that resources are mostly independent. There are no HTTP mechanisms to, say, invalidate a large group of resources with a single response. However, individual applications and environments can deal with that. For example, Apache provides per-directory access controls, and ICAP has the ISTag header to invalidate all cached responses from a given ICAP server at once. These examples do not use HTTP features, but they work fine on top of HTTP.

Again, some existing server implementations would be less appropriate for supporting XCAP, but that should not be a showstopper for XCAP.

$0.02,

Alex.
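P.S. To make the granularity point concrete, here is a minimal sketch in Python of one element of a larger XML document being read and replaced as its own HTTP resource. The host, document path, node selector, and media type below are illustrative guesses at an XCAP-style interface, not quotes from the draft; the point is that to HTTP the whole URL is just an opaque path.

    import http.client

    # Hypothetical XCAP-style server and element selector; the "~~"
    # separator and this particular path are illustrative, not normative.
    host = "xcap.example.com"
    path = ("/resource-lists/users/alice/index"
            "/~~/resource-lists/list%5B@name=%22friends%22%5D")

    conn = http.client.HTTPConnection(host)

    # GET retrieves just the selected element, not the whole document.
    conn.request("GET", path)
    resp = conn.getresponse()
    print(resp.status, resp.read())

    # PUT replaces that one element; the rest of the document is untouched.
    new_element = b'<list name="friends"><entry uri="sip:bob@example.com"/></list>'
    conn.request("PUT", path, new_element,
                 {"Content-Type": "application/xcap-el+xml"})
    resp = conn.getresponse()
    print(resp.status, resp.read())
    conn.close()

A compliant cache or proxy needs no new machinery for any of this; it is the origin server that decides that the URL maps to a sub-document node.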
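P.P.S. And for the ISTag point, a similarly hedged sketch of how "invalidate a whole group with one response" can be layered above HTTP. The generation-token idea mirrors ICAP's ISTag in spirit only; the cache class and token values here are invented for this example, and the token would ride in an ordinary application-defined response header.

    # A toy cache keyed on a server-wide generation token. When the
    # token carried by a response changes, every stored entry becomes
    # stale at once; no per-resource invalidation messages are needed.
    class GenerationCache:
        def __init__(self):
            self.generation = None  # last server token seen
            self.entries = {}       # url -> cached body

        def store(self, url, body, generation):
            if generation != self.generation:
                self.entries.clear()          # token changed: purge all
                self.generation = generation
            self.entries[url] = body

        def lookup(self, url, generation):
            if generation != self.generation:
                self.entries.clear()          # token changed: purge all
                return None
            return self.entries.get(url)

    cache = GenerationCache()
    cache.store("/docs/a", b"A", "gen-1")
    cache.store("/docs/b", b"B", "gen-1")
    assert cache.lookup("/docs/a", "gen-1") == b"A"
    # One response carrying a new token invalidates the whole group:
    assert cache.lookup("/docs/b", "gen-2") is None

Nothing in this scheme requires a change to HTTP itself, which is exactly why I do not see resource dependency as a showstopper.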
Received on Wednesday, 24 November 2004 16:18:58 UTC