Re: list of pending proposals

Balint Nagy Endre writes:

 In this case, we can define the sanity check as:
 > The URI given in a Location header field value and the request-URL
 > must have a common prefix
	...
 > Adding this restriction to the Location URI excludes the forgery
 > problem, but may have unwanted side effects.

Some sanity check similar to the one you mention seems reasonable --
but it is safe to leave such checks optional, since the worst that can
happen when one misfires is that a page is removed from a cache
prematurely.  So the sanity checks don't need to be defined as part of
the protocol.
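
For concreteness, here is a minimal sketch (Python, with invented
names -- none of this is from any draft) of the kind of optional check
being discussed.  A cache applying it would simply ignore a Location
header that fails the test:

    from urllib.parse import urlparse

    def location_is_sane(request_uri: str, location: str) -> bool:
        """Accept a Location value only if it shares an origin and a
        path prefix with the request that produced it."""
        req, loc = urlparse(request_uri), urlparse(location)
        # A different scheme or host must never touch the cache entry.
        if (req.scheme, req.netloc) != (loc.scheme, loc.netloc):
            return False
        # Require a common path prefix: /shop/item -> /shop/... passes.
        req_dir = req.path.rsplit("/", 1)[0] + "/"
        return loc.path.startswith(req_dir)

The worst a too-strict rule like this can do is skip an invalidation,
which is exactly why it can remain optional.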

	...
 > Rationale behind server-control:
 > If some decisions can and should be made on extension headers, only
 > proxies understanding those extension headers are allowed to forward
 > the request.  (This may not be acceptable in some cases, and may
 > leave open backdoors built into extension methods.  I'm sure we
 > should discuss the security aspects first.)

Rationale behind the permissive version of this: If you have client
software inside your protected network that is capable of issuing
custom HTTP method requests, you can say one of two things about that
software: (1) you trust it, or (2) you don't.  If you trust it, you
don't need to worry (much) about what kinds of methods it
invokes.  (Most people seem to trust implicitly whatever browser
software they pick up for free off the net, for instance.)  If you do
*not* trust it,
then it is irrelevant whether it only issues standard HTTP methods
your proxy recognizes or not, since it is a trivial matter for the
purveyor of this untrusted software to have put security violations
into the code for standard methods just as easily as into a custom
method.  If, on the other hand, some standard HTTP method were known
to reveal information about your network that your proxy's policy
would not permit to escape, then it makes sense for the proxy to
prevent such a method request from passing.  But notice that we're now
talking about a *standard*
method, and so the argument about extensibility no longer applies.
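
To put the permissive policy in concrete terms, here is a sketch with
hypothetical names; the blocklist is an arbitrary example of an
administrator's choice, not anything the protocol would mandate:

    STANDARD_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE"}
    BLOCKED_STANDARD = {"DELETE"}  # an admin's choice, not a protocol rule

    def should_forward(method: str) -> bool:
        # Only standard methods, whose behavior is known, are even
        # candidates for screening; extension methods pass untouched.
        if method in STANDARD_METHODS:
            return method not in BLOCKED_STANDARD
        return True

Note the asymmetry: the only methods worth blocking are the ones the
proxy already understands.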

A related case in point: people install Windows 95 and start Microsoft
Network.  This software then tells MSN what you've got on your disk.
(I don't know if it really does - we've all heard the story).  The
problem isn't at the protocol level, it's in trusting software you
allow onto your system without knowing what it does.  In the context
of HTTP, restricting against unknown methods is closing the barn door
after the horse has run away.  So my attitude is "why bother?"
Pretend security is worse than no security.
	...

 > >  > > 8a.  corollary: request methods should NOT be used as part of the
 > >  > > cache key for returned entities.  The reason: multiple entries under
 > >  > > the same URI contribute to the "evil doppelganger" problem.  (Among
 > >  > > standard HTTP methods, only GET and HEAD could ever fetch the cached
 > >  > > results of other method requests).  Cacheability of returned results
 > >  > > is entirely controlled by metadata in headers.
 > >  > Sanity checks again.
 > > What sanity checks do you mean? This proposal is actually a
 > > proposal in the direction of making things more CONSERVATIVE.
 > > An even more conservative approach would be that only GET can cache its
 > > results and use those results later, but I don't like that.
 > If we have good enough sanity checks, we don't need this restriction.
 > Without sanity checks, this restriction *may* have to be a MUST.
 > > 	...

I don't think so, because it appears already to be common practice to
do what I suggest.  (I know this from experiments with different
browsers and online systems, not from reading the code.)  I'd just
like to make this explicit.
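
To make 8a concrete, here is a sketch of the keying rule as I read it
(the names are mine, and actual cacheability would of course still be
decided by the response headers, which this omits):

    from typing import Dict, Optional

    cache: Dict[str, bytes] = {}  # keyed on URI alone, never (method, URI)

    def lookup(method: str, uri: str) -> Optional[bytes]:
        # Only GET and HEAD may be answered from the cache, though what
        # they find may be the stored result of any earlier method.
        if method in ("GET", "HEAD"):
            return cache.get(uri)
        return None  # every other method goes to the origin server

    def store(uri: str, entity: bytes) -> None:
        # One entry per URI: a later response simply replaces any
        # "evil doppelganger" left behind by a different method.
        cache[uri] = entity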

 > Andrew. (Endre Balint Nagy) <bne@bne.ind.eunt.hu>

Shel Kaphan
sjk@amazon.com
