Re: Optimizations vs Functionality vs Architecture

I think the same approach can be used for intermediaries, and it shows
why having both mechanisms is important.

An HTTP/2.0 connection can fail for two reasons:

1) The server does not support HTTP/2.0.
2) An intermediary in the current path is blocking it.

Real machines move from one network to another, so it is important to
be able to distinguish between the two failure modes if the information
is going to be cached. Otherwise we end up in a situation where the
cache is corrupted by the intermediary.

If the reason HTTP/2.0 is failing is that the intermediary blocks it,
then the client should stop trying HTTP/2.0 altogether until the network
conditions change. It should certainly not attempt to remember the
effect of trying HTTP/2.0 at specific sites.

If the reason it failed is that the server only supports 1.1, that is
something that can and should be remembered for future use with some
form of reasonable expiry policy so that one failure in 2012 does not
continue to apply in 2020.
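
To make that concrete, here is a rough Python sketch of the caching
policy I have in mind. The class name, the network_id key and the
30-day expiry are purely illustrative assumptions on my part, not a
proposal for specific values:

    import time

    THIRTY_DAYS = 30 * 24 * 3600  # assumed expiry; any reasonable policy works

    class Http2Cache:
        def __init__(self):
            # origin -> timestamp of last "server only speaks HTTP/1.1" failure
            self.server_downgrades = {}
            # network identifiers on which an intermediary was seen
            # blocking HTTP/2.0
            self.blocked_networks = set()

        def record_server_downgrade(self, origin):
            # Failure mode 1: the server itself does not support HTTP/2.0.
            self.server_downgrades[origin] = time.time()

        def record_intermediary_block(self, network_id):
            # Failure mode 2: something on the current path blocks HTTP/2.0.
            self.blocked_networks.add(network_id)

        def network_changed(self, old_network_id):
            # Moving to a new network invalidates the intermediary verdict;
            # per-server knowledge is kept because it is path-independent.
            self.blocked_networks.discard(old_network_id)

        def should_try_http2(self, origin, network_id):
            if network_id in self.blocked_networks:
                # Stop trying 2.0 entirely until network conditions change.
                return False
            last_fail = self.server_downgrades.get(origin)
            if last_fail is not None and time.time() - last_fail < THIRTY_DAYS:
                # Remember the 1.1-only server, but let that knowledge expire.
                return False
            return True

The point is simply that the two failure modes are keyed on different
things: the intermediary verdict on the current network, the server
verdict on the origin with an expiry.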



On Tue, Aug 21, 2012 at 4:47 PM, Yoav Nir <ynir@checkpoint.com> wrote:
>
> On Aug 21, 2012, at 10:14 PM, Poul-Henning Kamp wrote:
>>> We should if it's possible. Suppose HTTP/2.0 looks much like the SPDY draft.
>>> How can you ever get a current HTTP/1 server to reply to this?
>>
>> That's why I've been saying from the start that SPDY was an interesting
>> prototype, and now we should throw it away, and start from scratch, being
>> better informed by what SPDY taught us.
>
> A requirement for downgrade creates too many restrictions, even if we throw SPDY away. The beginning of a 2.0 connection would have to look enough like 1.x so as to fool existing servers.
>
> I think we should live with upgrade only, as long as clients can cache the knowledge that a certain server supports 2.0, so that they can skip the upgrade the next time. The extra roundtrip on a first encounter is not that bad.



-- 
Website: http://hallambaker.com/
