W3C home > Mailing lists > Public > ietf-http-wg@w3.org > January to March 1997

determining proxy reliability

From: Jeffrey Mogul <mogul@pa.dec.com>
Date: Tue, 18 Mar 97 14:14:30 PST
Message-Id: <9703182214.AA17555@acetes.pa.dec.com>
To: mcmanus@appliedtheory.com
Cc: http-wg@cuckoo.hpl.hp.com
X-Mailing-List: <http-wg@cuckoo.hpl.hp.com> archive/latest/2727
(This was Re: Unverifiable Transactions / Cookie draft, but
I think the topic has drifted far enough to merit a new Subject).

    From: Patrick McManus <mcmanus@appliedtheory.com>
    In a previous episode Ted Hardie said...
    :: Just to clarify, the proposals are to standardize a method
    :: to *allow* proxies to report this kind of data.  Nothing in the
    :: proposals *makes* anyone do anything.  Jeff and Paul
    :: were very clear about that from the beginning, and it
    :: keeps the hit-metering draft out of the scary
    :: "big-brother" category.
    Right on. And to clarify a little further: when serving to a proxy,
    the origin server is told whether or not the proxy pledges to
    return this information at a later date; if it doesn't, the server
    can cache-bust. The weakest point of the hit-metering draft, IMHO,
    is that it doesn't try to provide any other method of determining
    proxy reliability with respect to this pledge, on which to base
    the "to cache or to bust" decision.
It is true that there is no technical mechanism in the hit-metering
proposal to prevent a proxy from agreeing to hit-meter a response,
and then not doing so.  The proposal states MUST-level requirements,
but provides no means to verify that they are always observed.
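For concreteness, the pledge in the draft is carried in a Meter
header, hop-by-hop via Connection.  Something like the following
exchange (directive names are from the hit-metering draft; hostnames
are made up, and the exact syntax here is a sketch, not gospel):

	GET /page.html HTTP/1.1
	Host: example.com
	Connection: meter
	Meter: will-report-and-limit

	HTTP/1.1 200 OK
	Connection: meter
	Meter: max-uses=10
	Cache-control: max-age=3600

The proxy then reports accumulated usage counts back to the origin
server in a Meter directive on a later revalidation request.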

But this is no different from any other HTTP protocol requirement.
For example, an origin server can send
	Cache-control: no-store
to a proxy that identifies itself (in the request header) as
compliant with HTTP/1.1, but there is no way for the origin server
to verify that the proxy actually obeys this directive.
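A minimal illustration of that example (hostnames made up): the proxy
announces its HTTP/1.1 compliance in the Via header of the forwarded
request, and the origin server sends the directive in its response;
nothing in the protocol lets the server confirm the proxy obeys it.

	GET /page.html HTTP/1.1
	Host: example.com
	Via: 1.1 proxy.example.net

	HTTP/1.1 200 OK
	Cache-control: no-store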

If anyone can suggest "other methods of determining proxy
reliability with respect to this pledge" (or any other pledge
implied by an HTTP/1.1 version number), I'd be interested.  But
in general, this reduces to the problem of copy-protection in
fully digital representations.  The only way that I know how to
solve this, in a network that spans administrative boundaries,
is to use both end-to-end encryption and tamper-resistant
decryption hardware at the client end.  But this doesn't seem
either feasible or desirable.

There are non-technical means to verify compliance (auditing,
planting fake information to trick people into exposing
copyright violations, etc.), but these are beyond the scope
of a protocol specification.
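As a sketch of the "planting fake information" idea (my illustration,
not part of the original message, and all class and function names
are hypothetical): an origin server can serve a unique token in each
no-store response; probing twice through the same proxy and seeing
the same token exposes a cache that ignored the directive.

```python
import itertools

class Origin:
    """Serves a fresh token on every request, marked no-store."""
    def __init__(self):
        self._counter = itertools.count()
    def respond(self):
        return {"Cache-Control": "no-store",
                "body": f"token-{next(self._counter)}"}

class CompliantProxy:
    """Honors no-store: forwards every request to the origin."""
    def __init__(self, origin):
        self.origin = origin
    def fetch(self):
        return self.origin.respond()

class CheatingProxy:
    """Ignores no-store: serves a cached copy after the first fetch."""
    def __init__(self, origin):
        self.origin = origin
        self._cached = None
    def fetch(self):
        if self._cached is None:
            self._cached = self.origin.respond()
        return self._cached

def violates_no_store(proxy):
    """Two probes through the same proxy: identical bodies expose a cache."""
    return proxy.fetch()["body"] == proxy.fetch()["body"]

print(violates_no_store(CompliantProxy(Origin())))  # False
print(violates_no_store(CheatingProxy(Origin())))   # True
```

Of course this only catches a cache that serves the planted resource
itself; a selectively cheating proxy could still evade the probe,
which is why such auditing stays outside the protocol.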

Received on Tuesday, 18 March 1997 14:24:35 UTC
