W3C home > Mailing lists > Public > www-p3p-public-comments@w3.org > March 2000

Re: Representing Lifetimes of Direct Policy References

From: Lorrie Cranor <lorrie@research.att.com>
Date: Wed, 22 Mar 2000 10:21:06 -0500
Message-ID: <013901bf9412$3ec84e20$9816cf87@research.att.com>
To: <www-p3p-public-comments@w3.org>, "Martin J. Duerst" <duerst@w3.org>
Cc: <mpresler@us.ibm.com>
Martin J. Duerst <duerst@w3.org> wrote:
> This is a rather complicated feature, and it would be better
> if it could be avoided or simplified.
> The policy reference in an HTTP header or a <meta> is part of
> that response/document, so why not simply apply the caching
> directives of the response itself? What is the need for having
> separate directives just for the policy reference?

Let me respond by forwarding an explanation prepared
by one of our working group members, Martin Presler-Marshall,
in response to a similar question:

>> The way I understand HTTP, the document's expiration time is an
>> indicator of when an attribute of the document has changed.  For
>> example, when the content is no longer valid an attribute of the
>> document changes (staleness).  It appears to me that the policy
>> associated with a document is an attribute of the document.  I would
>> have thought, rather than requiring work from all client implementors
>> and proxy implementors to understand new HTTP headers, that the document
>> would be expired to indicate that an attribute, the policy, has
>> changed.  This is how HTTP works and how user agents and proxies
>> function now.  I'm certain I'm missing something here.
There are two major differences here. First is that the cache-control
headers defined by HTTP/1.1 indicate cacheability of documents. HTTP
does not define any cacheability of links at all. For efficiency
reasons, P3P needs to be able to define the cacheability of links.
Document-to-policy links may have much larger scopes than 1-to-1:
in fact, we expect a many-to-1 scope for almost all
implementations. In addition, these links may have lifetimes very
different from the lifetime of the underlying document. For example,
the dynamic output of a CGI program is practically never cacheable
in HTTP, but the link between that dynamic output and its privacy
policy may well be cacheable.
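The CGI example above can be sketched in code. This is an illustrative sketch only (the class and names are hypothetical, not the P3P wire format): a user agent keeps a separate cache of document-to-policy links, so the uncacheable dynamic output and its cacheable policy link have independent lifetimes, and one link entry can cover many URIs.

```python
import time

class PolicyRefCache:
    """Hypothetical cache of document-URI -> policy-URI links,
    each with its own lifetime, independent of document caching."""
    def __init__(self):
        self._links = {}  # URI prefix -> (policy URI, expiry timestamp)

    def store(self, uri_prefix, policy_uri, max_age):
        self._links[uri_prefix] = (policy_uri, time.time() + max_age)

    def lookup(self, uri):
        # Many-to-1: one cached link covers a whole URI prefix, even
        # when the documents themselves are uncacheable CGI output.
        for prefix, (policy_uri, expires) in self._links.items():
            if uri.startswith(prefix) and time.time() < expires:
                return policy_uri
        return None

cache = PolicyRefCache()
# The CGI output itself would be "Cache-Control: no-cache", but the
# link from /cgi-bin/* to the site's policy is cacheable for a day.
cache.store("http://example.com/cgi-bin/", "http://example.com/policy1", 86400)
print(cache.lookup("http://example.com/cgi-bin/search?q=p3p"))
```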

The different scopes and different expected lifetimes drove us to
create a new caching mechanism. I will note, however, that the
mechanisms we picked were derived from the Cache-Control
mechanism in HTTP/1.1. This was done to make implementation simpler
for applications. Note also that a proxy or gateway need not
do any processing of these new headers unless it is offering some
P3P functionality. These new headers will pass through an HTTP/1.0
or HTTP/1.1 proxy without causing it - or the final recipient - to
malfunction.

A third, less crucial consideration also played a role. In my (significant)
experience with caching HTTP documents, the vast majority of cacheable
documents have cache lifetimes computed from the last-modified date
of the document. We didn't feel that this sort of
heuristic cache lifetime computation was appropriate for caching
references to privacy policies.
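The heuristic Presler-Marshall alludes to can be sketched briefly. This is a hedged illustration of common cache practice (the 10% fraction is a typical convention, not anything P3P specifies): when no explicit expiry is given, caches often assign a freshness lifetime proportional to the time since the document was last modified.

```python
def heuristic_freshness(date, last_modified, fraction=0.10):
    """Heuristic freshness lifetime in seconds: a fraction of the
    time elapsed between Last-Modified and the response Date."""
    return max(0.0, (date - last_modified) * fraction)

# A document last modified 10 days before the response gets roughly
# one day of heuristic freshness under the 10% convention.
day = 86400
print(heuristic_freshness(date=20 * day, last_modified=10 * day))  # 86400.0
```

A privacy-policy link derives no such signal from the document's last-modified date, which is why this style of computed lifetime was judged inappropriate here.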

>> Relating to the authoritative cache policy.
>> As I understand it, the caching problem presented, is a common problem
>> in HTTP and one that must be addressed in every user agent
>> implementation.  Namely that the client and server clocks are out of
>> synch.  HTTP 1.1 talks about resource age calculations in some detail.
>> I don't see how adding more cache control headers fixes the problem.  I
>> mean you still have propagation delays and the possibility for
>> asymmetric request/response.
>> For example: I make a request for some content from a server.  The
>> response comes back at Time X with a max-age of "10".  Now one of the
>> things the client has to figure out is the response delay.  That is
>> simply ResponseTime-RequestTime.  If it took 14 seconds for the server
>> to accept the request but only 6 seconds for the server to generate the
>> response, then the client sees the response delay as 20 seconds.  The
>> client then expires the content incorrectly since "in reality" there are
>> still 4 seconds left on the content's freshness lifetime.
It is impossible to fix this problem completely without including the
"expected" policy in the client's request, and having the server reject
the request if the policy has changed. This sort of mechanism would
move the "active policy" decision to a single point (the server), where
there is no clock slew or network latency to make things difficult.
However, we felt that a mechanism like this, while easy to describe,
increased the barrier to entry for P3P-enabling sites too greatly, so
it was not included in P3P v1.0.
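The age calculation the questioner describes is the one HTTP/1.1 specifies (RFC 2616, section 13.2.3); it can be sketched as below. It is deliberately conservative: the full request/response delay is charged against freshness, so a response can be treated as stale "too soon" but never too late, which matches the trade-off discussed above.

```python
def current_age(date_value, age_value, request_time, response_time, now):
    """HTTP/1.1-style current-age computation (all times in seconds)."""
    apparent_age = max(0, response_time - date_value)
    corrected_received_age = max(apparent_age, age_value)
    # The full round-trip delay is counted against freshness, since
    # the client cannot tell how it split between request and response.
    response_delay = response_time - request_time
    corrected_initial_age = corrected_received_age + response_delay
    return corrected_initial_age + (now - response_time)

# The scenario from the question: max-age=10, 20 seconds of round-trip
# delay. The whole 20 s counts against the 10 s lifetime, so the client
# treats the response as already stale.
age = current_age(date_value=100, age_value=0,
                  request_time=100, response_time=120, now=120)
print(age)  # 40 -- stale under max-age=10
```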

>> This makes things expire sooner rather than later which isn't a bad
>> thing considering the content.  But, work is required to implement this
>> behavior in clients and proxies for limited benefit.  (I am of the mind
>> that explicitly caching references is not needed at all).

I expect that there will be some very important cases which could be helped
significantly by caching policy references. One example is form submission:
if the HTML page containing the form indicates the policy which covers the
ACTION URI, and how long that's good for, then the user-agent can take
action before the form is submitted. Without this information, the user
agent would have to make a fictitious request before the actual
form-submission request to determine the policy applying to the ACTION
URI. This would dramatically slow down form submission (by adding extra
latency), as well as increase the load on servers.
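The form-submission case can be sketched as follows. This is a minimal sketch with hypothetical function names (not the P3P protocol): if the page that delivered the form also supplied a cached policy link for the ACTION URI, the user agent can evaluate the policy before submitting, with no extra round trip.

```python
def ready_to_submit(action_uri, policy_cache, fetch_policy, user_accepts):
    """Return True if the cached policy for action_uri is acceptable."""
    policy_uri = policy_cache.get(action_uri)
    if policy_uri is None:
        # Without a cached link, an extra request would be needed here,
        # adding latency to every form submission.
        return False
    return user_accepts(fetch_policy(policy_uri))

cache = {"http://example.com/submit": "http://example.com/policy1"}
ok = ready_to_submit("http://example.com/submit", cache,
                     fetch_policy=lambda uri: {"purpose": "order-fulfillment"},
                     user_accepts=lambda policy: True)
print(ok)  # True
```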

There are other cases as well; I point this one out only as an example.
Received on Wednesday, 22 March 2000 10:21:48 GMT

This archive was generated by hypermail 2.2.0+W3C-0.1 : Tuesday, 21 September 2004 12:14:16 GMT