Re: WGLC: p2 MUSTs

On 4/08/2013 1:48 p.m., Roy T. Fielding wrote:
> On Apr 30, 2013, at 1:46 PM, Alex Rousskov wrote:
>
>>> The CONNECT method requests that the recipient establish a tunnel to
>>> the destination origin server [...], until the connection is closed.
>> The "until the connection is closed" part is misleading and inaccurate.
>>
>> There are two connections in a CONNECT tunnel: (a) between a CONNECT
>> sender and CONNECT recipient and (b) between the CONNECT recipient and
>> the next HTTP hop. The tunnel termination condition is rather complex and is
>> detailed later in the same section. It may be a good idea to drop the
>> "until..." part. At least I cannot suggest a way to describe it
>> correctly as an ending of an already long sentence :-).
> Changed to "until the tunnel is closed".
>
>>> When a tunnel intermediary detects that either side has closed its
>>> connection, any outstanding data that came from that side will first
>>> be sent to the other side and then the intermediary will close both
>>> connections. If there is outstanding data left undelivered, that data
>>> will be discarded.
>> These "will"s should be rephrased as intermediary MUSTs IMO. I also
>> suggest moving them higher, before the informal risk discussion.
> Moved, fixed, and rephrased to "A tunnel is closed when ..."
>
>>> A client MUST NOT send header fields in a TRACE request containing
>>> sensitive data
>> The above rule seems too onerous to proxies. Replace "MUST NOT send"
>> with "MUST NOT generate"?
> Fixed.
>
>>> 5.1.1.1 Use of the 100 (Continue) Status
>>> Requirements for HTTP/1.1 clients:
>>> ...
>>> Requirements for HTTP/1.1 proxies:
>> Should we explicitly exclude proxies from the first group of
>> requirements by saying "Requirements for user agents" instead of
>> "Requirements for clients"?
> No, the first set applies to proxies that want to use 100-continue
> for their own reasons.
>
>>> MUST contain an updated Max-Forwards field with a value decremented by one (1).
>> A lot of proxies violate this MUST because they cannot grok and, hence,
>> cannot decrement large integer values. Interoperability problems might
>> happen when a client generates Max-Forwards with a maximum value it can
>> store (e.g., to count the number of hops to the origin server) but the
>> proxy cannot store such a large value (e.g., 64bit vs 32bit).
>>
>> Perhaps we can relax this rule by allowing proxies to decrement by "at
>> least one", so that a huge value can be replaced with the maximum value
>> the proxy can represent?
> Changed to
>
>    If the received Max-Forwards value is greater than zero,
>    the intermediary MUST generate an updated Max-Forwards field
>    in the forwarded message with a field-value that is the lesser of:
>    a) the received value decremented by one (1), or
>    b) the recipient's maximum supported value for Max-Forwards.

Isn't Max-Forwards used for sending OPTIONS and such to a specific hop?

I know these limits are theoretically supposed to be absurdly high. But 
if an implementation decided its limit was 2 or some other small value, 
the above rule would break tracing. For example, the client would get 
back a constant response generated from hop X+2 when it was actually 
querying hops X+3 to X+N.

I think an error response would be better if Max-Forwards is bigger than 
the implementation can support.
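
To make the contrast concrete, here is a small sketch of that alternative: reject an unrepresentable Max-Forwards value instead of clamping it as in the proposed text. The limit constant and the choice of status codes are my own assumptions for illustration, not anything from the draft (Python ints are arbitrary-precision, so the limit here is artificial):

```python
MAX_SUPPORTED = 2**31 - 1  # hypothetical per-implementation limit


def handle_max_forwards(field_value: str):
    """Sketch: error out rather than clamp a Max-Forwards value the
    implementation cannot represent, so tracing is never silently
    cut short at the clamped hop."""
    if not field_value.isdigit():
        return ('error', '400 Bad Request')  # malformed field-value
    if int(field_value) > MAX_SUPPORTED:
        # Too large to decrement faithfully: refuse instead of clamping.
        return ('error', '502 Bad Gateway')
    n = int(field_value)
    if n == 0:
        return ('respond-self', None)  # this hop is the target
    return ('forward', str(n - 1))    # decrement by exactly one
```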

FWIW: there are big-math tricks for incrementing or decrementing 
arbitrarily large numeric values using a counter as small as 8 bits if 
necessary, so while the speed issue is relevant, the X-bit overflow 
should not be. The old Co-Advisor test for this was to send a 72-bit 
numeric value in the header and expect a successful decrement.
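
One such trick is to decrement the decimal field-value digit by digit, borrowing as needed, so no machine-word integer is ever involved. A minimal sketch (assumes the value has already been validated as a non-zero digit string, per the "greater than zero" condition):

```python
def decrement_decimal(value: str) -> str:
    """Decrement an arbitrarily long decimal string by one without
    parsing it into a fixed-width integer. Caller must ensure the
    value is a digit string greater than zero."""
    digits = list(value)
    i = len(digits) - 1
    while i >= 0:
        if digits[i] == '0':
            digits[i] = '9'  # borrow from the next digit to the left
            i -= 1
        else:
            digits[i] = str(int(digits[i]) - 1)
            break
    # Strip leading zeros introduced by borrowing (e.g. "100" -> "099")
    return ''.join(digits).lstrip('0') or '0'
```

This handles a 72-bit (or any-size) value with no overflow, since only one digit is touched at a time.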

Amos

Received on Sunday, 4 August 2013 02:50:30 UTC