
Re: Comments/Issues on P1

From: Mark Nottingham <mnot@mnot.net>
Date: Tue, 24 Apr 2012 17:21:52 +1000
Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>, John Sullivan <jsullivan@velocix.com>
Message-Id: <668C75FF-C3DF-486B-9AAF-CD9159BA3A44@mnot.net>
To: Ben Niven-Jenkins <ben@niven-jenkins.co.uk>

Hi Ben,

Just so it's clear: p1 is NOT in WGLC right now; you're welcome to review it, but the document is still in flux. 

If you think there are substantive issues in the document (i.e., things that are not likely to be changed in an editorial rewrite), you're welcome to raise issues now, but it may be more productive to wait until it settles a bit.

Finally, if/when you do raise issues, please include the document revision you're making comments upon, and we prefer substantive issues to be raised one-per-email.

Cheers,


On 24/04/2012, at 4:11 AM, Ben Niven-Jenkins wrote:

> Hi,
> 
> As part of reviewing the HTTPBIS documents, we had the following comments/issues on P1.
> 
> 
> 1) On page 11 it states:
> 
>   All HTTP requirements
>   applicable to an origin server also apply to the outbound
>   communication of a gateway. [...]
> 
>   However, an HTTP-to-HTTP gateway that wishes to interoperate with
>   third-party HTTP servers MUST conform to HTTP user agent requirements
>   on the gateway's inbound connection
> 
> This is probably a good default assumption, but it is not always true:
> 
> * A User-Agent can, either automatically or under interactive
>  user direction, decide to retry non-idempotent requests. An
>  intermediate must never do this.
> 
> * An origin server is always authoritative; an intermediate is
>  not, and so it sometimes cannot make decisions that an
>  origin could. (See If-Match below for an example.)
> 
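The distinction in the two bullets above can be made concrete. A minimal sketch (the function and names are illustrative, not from the draft):

```python
# Methods the spec defines as idempotent (safe to repeat automatically).
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def may_retry(method: str, is_intermediary: bool, user_approved: bool = False) -> bool:
    """Return True if retrying a failed request is permissible.

    A user agent may retry a non-idempotent request under explicit user
    direction; an intermediary never has that option, because it has no
    user to ask.
    """
    if method in IDEMPOTENT_METHODS:
        return True
    if is_intermediary:
        return False          # an intermediate must never retry e.g. POST
    return user_approved      # a UA may, but only with the user's consent
```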
> 
> 2) On page 13 it states:
> 
>   thereby letting the recipient know that more advanced
>   features can be used in response (by servers) or in future requests
>   (by clients).
> 
> and on page 56 it states:
> 
>     o  Proxies SHOULD maintain a record of the HTTP version numbers
>        received from recently-referenced next-hop servers.
> 
> A server does not appear to be committed to supporting the same HTTP version from request to request, or from one URL to another on the same server. (As an example, at the same address and under the same vhost, some URLs might be served by the "real" HTTP server, which fully supports HTTP/1.1, and some by CGI scripts which might only support HTTP/1.0.)
> 
> Therefore it seems unwise to rely on the upstream server supporting a particular version from request to request, unless there is a tighter coupling between them (for example, under the same organisational control) and the intermediate can be configured to specifically assume a given HTTP version for a given upstream.
> 
> The above is also unclear about what constitutes a "next-hop server": if the hostname resolves to multiple addresses, they are not all necessarily running the same version of the server, or even the same server software at all. The same is probably even more likely for different ports on the same machine.
> 
> Different vhosts might be directed to different backends with different capabilities, and as mentioned the same is true for different URLs even on the same address/port/vhost. The most conservative interpretation would be to match on address, port, vhost and URL, but that seems to require remembering an excessive amount of history.
> 
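To illustrate what such record-keeping involves, here is a toy sketch (assumed names, not from any implementation); the choice of key tuple is exactly the open question raised above:

```python
from time import monotonic

class NextHopVersionCache:
    """Toy record of HTTP versions recently seen from next-hop servers.

    Key granularity is the open question: keying on (address, port) is
    cheap but can be wrong when different vhosts or URLs on the same
    listener are served by different software; keying on (address, port,
    vhost, path) is safer but remembers far more history.
    """
    def __init__(self, ttl: float = 300.0):
        self.ttl = ttl
        self._seen: dict[tuple, tuple[str, float]] = {}

    def record(self, key: tuple, version: str) -> None:
        self._seen[key] = (version, monotonic())

    def lookup(self, key: tuple) -> str:
        entry = self._seen.get(key)
        if entry is None or monotonic() - entry[1] > self.ttl:
            return "HTTP/1.0"   # assume the least capable version when unsure
        return entry[0]
```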
> 
> 3) On page 18 it states:
> 
>   Resources made available via the "https" scheme have no shared
>   identity with the "http" scheme even if their resource identifiers
>   indicate the same authority (the same host listening to the same TCP
>   port).  They are distinct name spaces and are considered to be
>   distinct origin servers.
> 
> While this is true for a separate shared cache, a common deployment might be gateways employed as protocol translators/accelerators: a small number of backend origins without enough oomph to serve HTTPS directly, behind a larger set of more powerful HTTPS-enabled gateways under the same organisational control, linked over an internal-only private network. Here the internal traffic is "http://" requests, but the external traffic is "https://", with the obvious direct mapping of path URIs from one to the other.
> 
> The same consideration applies to other protocol elements which are normally mandatory for intermediates to add, modify in specific ways, or relay untouched. For example, an organisation might not want their gateways emitting Via headers at all, so that the gateway looks more like a real origin to external clients, or may want the gateway to set specific Expires/max-age values on egress which are different from those supplied by the real origin server.
> 
> Since such requirements of the specification which are currently mandatory without exception *are* going to be violated in practice (sometimes with good reason, sometimes with less good reason), it might be useful to explore the boundaries of when intermediates can get away with this and when it is definitely wrong. A single organisation doing this with its own gateways, or a service provider doing so under that organisation's express direction, would seem to be on pretty solid ground, whereas an intercepting proxy controlled by a completely unrelated entity should probably never do so.
> 
> 
> 4) (Related to pages 19-21) A non-conforming implementation might generate octets, or sequences of octets, that are out of spec. For example:
> 
>  * URLs containing unencoded NUL bytes
>  * URLs containing unencoded CTL or high-bit bytes
>  * NUL bytes anywhere elsewhere within the start-line or headers
>  * CTLs or high-bit bytes elsewhere within the start-line or headers
>  * non-token non-SP characters within the method
>  * multiple SP characters, or alternative whitespace, separating
>    start-line elements
>  * non-token non-whitespace characters within a field-name
>  * invalid (%ZZ) or truncated (%4z in the middle or %4 at the end)
>    %-hex sequences in the URL.
> 
> No specific behaviour appears to be mandated under these conditions, so it appears to be valid for a server to attempt to satisfy the request as received, or an intermediate to relay them. An intermediate might not even notice the invalid characters if it has no other reason to parse the affected protocol element.
> 
> In practice origin servers mostly generate 4xx errors, and an intermediate might either generate a direct 4xx or relay the request and rely on the origin doing so. NUL bytes in URLs are especially dangerous and a known attack vector, so it would seem sensible for detecting them and generating a 400 to be mandatory there, and probably for NULs elsewhere too. The others could be argued either way, but it would be good to have some specific direction in the specification.
> 
> And although 3.1 disallows whitespace between the start-line and first header, no specific reaction appears to be mandated there either. The 3.2.2 rules seem like a suitable approach.
> 
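For concreteness, the kind of checks listed above could be sketched as follows (illustrative only; the draft mandates none of this, and the function name is my own):

```python
import re

# Token characters per the HTTP grammar; "%" is a legal tchar.
TOKEN_RE = re.compile(r"^[!#$%&'*+.^_`|~0-9A-Za-z-]+$")
# A "%" not followed by exactly two hex digits (catches %ZZ, %4z, trailing %4).
BAD_PCT_RE = re.compile(r"%(?![0-9A-Fa-f]{2})")

def check_request_line(method: bytes, target: bytes) -> list[str]:
    """Return a list of reasons a 400 response would be justified."""
    problems = []
    if b"\x00" in method + target:
        problems.append("NUL byte in start-line")        # known attack vector
    if any(b < 0x21 or b > 0x7e for b in target):
        problems.append("CTL or high-bit byte in request-target")
    if not TOKEN_RE.match(method.decode("ascii", "replace")):
        problems.append("non-token character in method")
    if BAD_PCT_RE.search(target.decode("ascii", "replace")):
        problems.append("invalid or truncated %-hex sequence")
    return problems
```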
> 
> 5) (The Note at the top of page 23)
> 
> Set-Cookie2/Cookie2 don't appear to ever have had widespread usage, and are now officially deprecated by RFC 6265. Pointing out that they do not suffer from a problem affecting Set-Cookie/Cookie, without also noting the deprecation, might be taken as encouraging their use, which seems odd.
> 
> 
> 6) On page 56 it states:
> 
>   o  A proxy MUST NOT forward a 100 (Continue) response if the request
>      message was received from an HTTP/1.0 (or earlier) client and did
>      not include an Expect header field with the "100-continue"
>      expectation.  This requirement overrides the general rule for
>      forwarding of 1xx responses (see Section 7.1 of [Part2]).
> 
> Although I *think* this is an improvement over previous wordings, it does still seem to mandate that an HTTP/1.1 proxy forward a 100 (Continue) response to an HTTP/1.0 request which included an "Expect: 100-continue" header. Since an HTTP/1.0 downstream proxy is likely to have blindly forwarded this header but not be capable of understanding a 100 (Continue) response, this seems strange.
> 
> (It appears that this has been discussed several times on the httpbis list without consensus being reached.)
> 
> The downstream could do any number of bad things in response to a relayed 100 response: cache the final response headers and body as if they were both the response body; get "stuck" waiting for the "end" of a message that it never sees; or pass the entirety of the following data on the connection to the wrong client.
> 
> It seems safest to simply state that a 100 (Continue) response must *never* be transmitted in response to an HTTP/1.0 request.
> 
> Secondly, the same applies to any other 1xx response, but even the above exception doesn't apply to them: an HTTP/1.1 proxy must apparently always forward a 1xx response that it knows the downstream will probably not be able to parse.
> 
> Thirdly, the client requirements state "the client SHOULD NOT wait for an indefinite period before sending the request body." I think RFC 2616 only applied this to user agents and not proxies, so just to clarify: is it intended that the above language now apply to intermediates as well? (I think it should, but whereas a user agent can simply start transmitting the body on timeout,
> an intermediate may need to synthesize a 100 (Continue) response to force the downstream to start transmitting. Assuming that the downstream is HTTP/1.1 capable, of course.)
> 
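The gap between the draft's rule and the stricter rule argued for above can be sketched as two decision functions (hypothetical helper names, illustrative only):

```python
def draft_rule(status: int, client_version: str, expects_100_continue: bool) -> bool:
    """p1's current rule: suppress only 100 (Continue) toward an
    HTTP/1.0-or-earlier hop that did not send Expect: 100-continue;
    forward every other 1xx unconditionally."""
    if status == 100 and client_version < "HTTP/1.1" and not expects_100_continue:
        return False
    return True

def proposed_rule(status: int, client_version: str, expects_100_continue: bool) -> bool:
    """The safer rule argued for above: never relay any 1xx toward an
    HTTP/1.0 (or earlier) hop, Expect header or not, since such a hop
    cannot parse interim responses at all."""
    # Lexicographic comparison suffices for "HTTP/0.9" / "HTTP/1.0" / "HTTP/1.1".
    return client_version >= "HTTP/1.1"
```

Note that `draft_rule(100, "HTTP/1.0", True)` still forwards, which is the oddity described above, and that it forwards every non-100 1xx even to an HTTP/1.0 hop, which is the "Secondly" point.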
> Thanks
> Ben
> 
> 

--
Mark Nottingham   http://www.mnot.net/
Received on Tuesday, 24 April 2012 13:59:07 GMT
