- From: <hallam@w3.org>
- Date: Mon, 11 Mar 96 21:32:00 -0500
- To: Larry Masinter <masinter@parc.xerox.com>
- Cc: hallam@w3.org, http-wg-request%cuckoo.hpl.hp.com@hplb.hpl.hp.com
>if you
>
>    GET http://foo.com/test#frob HTTP/1.0
>
>they might ask foo.com for
>
>    GET /test%23frob HTTP/1.0
>
>or vice versa. Is there any reason to disallow this, and if so, what
>language would be put in the spec to disallow it; alternatively, if
>proxies might do this kind of transformation, what should we say?

I would say we have to let this pass, principally because of the large
number of proxies out there which already do this.

There is a principled reason behind this. We distinguish a proxy or
gateway from a tunnel by the level at which the agent acts. Tunnels act
purely at the syntactic level; proxies and gateways act at the semantic
level. We permit a proxy to perform any action provided it is
semantically neutral, and in addition may permit a number of semantic
transformations.

[Another area to consider is using URLs as a subliminal channel or a
basis for steganography. The URL above effectively allows us to encode
0s and 1s. There may well be cases where we don't want that to happen.
This is a fairly pointless nit to pick, but I thought I would get in
first.]

The reason the semantic-transformation point matters is that a firewall
proxy may want to strip out all header lines it does not understand,
effectively down-rating the connection to HTTP/1.0 or some other known
safe subset. This requires a complete parse of the request and
regeneration of the passed-on request. The parser may wish to check URLs
specifically for strict validity, to avoid possible holes arising from
illegal escape sequences. Thus it is a legitimate procedure to parse out
the URL, canonicalise it, and regenerate it.

I think that if we want strict syntactic neutrality in a proxy we have
to use the wrapped method.

Phill
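[Editorial note: the following is a minimal sketch, not part of the
original message, of the parse/canonicalise/regenerate step Phill
describes. It assumes Python; the function name and the exact
canonicalisation rules are illustrative only.]

    import re

    # Characters that never need escaping in a URL path.
    UNRESERVED = set(
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
        "0123456789-_.~"
    )

    def canonicalise_path(path):
        """Validate every %XX escape and regenerate the path in one
        canonical form: escapes of unreserved characters are decoded,
        remaining escapes get upper-case hex digits."""
        out = []
        i = 0
        while i < len(path):
            if path[i] == "%":
                # A lone or malformed escape is refused outright, so no
                # illegal sequence is passed on to the origin server.
                if not re.fullmatch(r"[0-9A-Fa-f]{2}", path[i + 1:i + 3]):
                    raise ValueError("malformed escape in %r" % path)
                byte = int(path[i + 1:i + 3], 16)
                if chr(byte) in UNRESERVED:
                    out.append(chr(byte))        # e.g. %2D -> -
                else:
                    out.append("%%%02X" % byte)  # keep escape, fixed case
                i += 3
            else:
                out.append(path[i])
                i += 1
        return "".join(out)

    # e.g. canonicalise_path("/test%2Dpage%3f") returns "/test-page%3F";
    # "%23" stays escaped, since "#" is not an unreserved character.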
Received on Tuesday, 12 March 1996 01:59:29 UTC