Re: Backwards compatibility


------ Original Message ------
From: "Mark Watson" watsonm@netflix.com
>
>On Mar 30, 2012, at 3:41 PM, Adrien W. de Croy wrote:
>
>>
>>------ Original Message ------
>>From: "Mark Watson" watsonm@netflix.com
>>>
>>>Send the requests (yes, pipelined). If they come back without ids, 
>>>then they are coming back in the order they were sent. If they come 
>>>back with ids, then that tells you which response is which.
>> 
>>there could be pathological cases where some come back with IDs and 
>>some without.
>
>I don't see how that could be the case if every intermediary on the 
>path has indicated that it supports the extension. But I was not 
>presenting a detailed protocol design, just an illustration of the 
>type of backwards compatible design approach I was advocating. Work 
>would certainly be required to design it.
I was just thinking of an LB that takes an incoming 2.0 stream of 
requests and talks to a mix of 2.0 and 1.1 servers on the back end.  If 
any back-end server is only 1.1, then the mux couldn't send any IDs down 
that leg - and it might not know that until too late.  A better option 
may be for it to assign its own IDs.
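
Roughly what I have in mind, as a sketch only (the "Request-Id" header 
and the shapes here are invented, not from any draft):

from collections import deque

class BackendChannel:
    """One outbound connection from the LB to a back-end server."""

    def __init__(self, backend_speaks_mux: bool):
        self.backend_speaks_mux = backend_speaks_mux
        self.in_flight = deque()              # client-assigned IDs, oldest first

    def forward(self, request_id, headers: dict) -> dict:
        if self.backend_speaks_mux:
            return headers                    # the ID travels with the request
        self.in_flight.append(request_id)     # remember the order instead
        return {k: v for k, v in headers.items() if k.lower() != "request-id"}

    def on_response(self, headers: dict) -> dict:
        if self.backend_speaks_mux:
            return headers                    # back end already echoed the ID
        tagged = dict(headers)
        # 1.1 pipelining guarantees responses come back in request order
        tagged["Request-Id"] = self.in_flight.popleft()
        return tagged

i.e. the only extra state is per-connection ordering, but the LB does 
need to know up front whether a given back end can take IDs at all.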
  
>
>> 
>>>
>>>>The former incurs a large latency cost. The latter depends very 
>>>>much on how deployable you view pipelining on the overall internet.
>>>
>>>It's certainly widely deployed in servers and non-transparent 
>>>proxies. Non-supporting non-transparent proxies are easily detected. 
>>>Yes, broken transparent proxies are a (small) problem, but you can 
>>>also detect these.
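
(As an aside, the kind of probe I assume is meant here - a crude sketch 
only, and the two-status-lines check is deliberately simplistic:)

import socket

def probe_pipelining(host: str, port: int = 80, timeout: float = 5.0) -> bool:
    """Send two pipelined GETs on one connection; a broken intermediary
    typically stalls, resets, or garbles the second response."""
    probe = (f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: keep-alive\r\n\r\n"
             f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n")
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(probe.encode("ascii"))
            data = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                data += chunk
    except OSError:
        return False
    return data.count(b"HTTP/1.1 ") >= 2   # very crude: did two status lines come back?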
>>>
>>>>I am skeptical it is sufficiently deployable and we on Chromium are 
>>>>gathering numbers to answer this question (http://crbug.com/110794).
>>>
>>>Our internal figures suggest that more than 95% of users can 
>>>successfully use pipelining. That's an average. On some ISPs the 
>>>figure is much lower.
>> 
>>Do you keep stats of how many of those 95% are not going through a 
>>proxy of any (detectable) kind?  I'd imagine the proportion (of 
>>directly-connected users) to be quite high.
>
>No, we don't have that information.
>
>>>>Interleaving data from multiple responses requires some kind of 
>>>>framing, yes. Chunked transfer encoding is a kind of framing that 
>>>>is already supported by HTTP. Allowing chunks to be associated with 
>>>>different responses would be a simple change. Maybe it feels like a 
>>>>hack ? That was my question: why isn't a small enhancement to the 
>>>>existing framing sufficient ?
>>I think there would be interop issues.
>
>Can you elaborate ?
  
Intercepting intermediaries that think they know what they are looking 
at.
  
These are legion, at least in NZ ISPs and in many corporates.
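
To make the worry concrete: suppose chunks carried the response they 
belong to as a chunk extension (the ";id=" syntax here is purely 
hypothetical).  A hop that understands it can demultiplex on the 
extension; a 1.1-only intercepting proxy that de-chunks or re-chunks the 
stream throws the extensions away and happily merges two responses into 
one body:

wire = (b"5;id=1\r\nHello\r\n"
        b"5;id=2\r\nWorld\r\n"
        b"0\r\n\r\n")

def dechunk_ignoring_extensions(data: bytes) -> bytes:
    """What a plain 1.1 hop does: strip extensions, join all chunk data."""
    body, rest = b"", data
    while rest:
        size_line, rest = rest.split(b"\r\n", 1)
        size = int(size_line.split(b";")[0], 16)   # extension discarded here
        if size == 0:
            break
        body, rest = body + rest[:size], rest[size + 2:]
    return body

print(dechunk_ignoring_extensions(wire))   # prints b'HelloWorld': two responses merged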

  
>
>> 
>>>> 
>>>> Putting my question another way, what is the desired new feature 
>>>> that really *requires* that we break backwards compatibility with 
>>>> the extremely successful HTTP1.1 ?
>>> 
>>> Multiplexing,
>> 
>> See my question above
>> 
>>>header compression,
>
>Easily negotiated: an indicator in the first request indicates that 
>the client supports it. If that indicator survives to the server, 
>the server can start compressing response headers right away. If the 
>client receives a compressed response it can start compressing 
>future requests on that connection. It's important that this 
>indicator be one which is dropped by intermediaries that don't 
>support compression.
>
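Concretely, I read that as something along these lines (header names 
invented; listing the token in Connection is just one way to get the 
"dropped by hops that don't play along" property):

import zlib

EXT_TOKEN = "compress-hdrs"                    # invented token name

def client_first_request(headers: dict) -> dict:
    out = dict(headers)
    out["HTTP-Extensions"] = EXT_TOKEN         # invented indicator header
    # Hop-by-hop: any 1.1 intermediary strips headers listed in Connection
    # before forwarding; only a hop that supports the extension would add
    # it back on its outbound leg, so the indicator reaches the server
    # only if every hop understood it.
    out["Connection"] = "HTTP-Extensions"
    return out

def server_may_compress(request_headers: dict) -> bool:
    return EXT_TOKEN in request_headers.get("HTTP-Extensions", "")

def compress_header_block(headers: dict) -> bytes:
    raw = "".join(f"{k}: {v}\r\n" for k, v in headers.items())
    return zlib.compress(raw.encode("latin-1"))

# Client side: only start compressing requests once a compressed
# response has actually been seen on this connection.
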
>>>prioritization.
>
>I think you mean "re-prioritization". I can send requests in priority 
>order - what I can't do is change that order in response to user 
>actions. How big a deal is this, vs closing the connection and 
>re-issuing outstanding requests in the new order?
>>I'd like to add
>> 
>>support for new additional semantics - the kind that aren't possible if 
>>there's a 1.1 hop in the chain, but otherwise would be.
>> 
>>An example is some sort of subscribed notification, where you can 
>>send a single request, and get any number of responses with entities, 
>>as and when the server feels is right to send.
>> 
>>Think Facebook new message notifications, or online shopping card 
>>transaction status.
>
>That indeed would be a new protocol, if you can make the case for 
>providing that functionality at the HTTP layer, compared to the 
>application layer where it lives today.
  
Where it lives today is on top of HTTP, which creates problems for the 
underlying HTTP infrastructure - things like effectively building TCP 
over a pair of HTTP connections over TCP.  It's at best very inefficient, 
even though it works by and large.  Do we still want to be doing it that 
way in 20, or 100 years?
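
What it looks like today is basically a long-poll loop (rough sketch, 
URL and payload shape invented): one connection parked waiting on the 
server, a second for the normal traffic, and a full request/response 
round trip with headers for every single event:

import json
import urllib.request

def long_poll(url: str, handle) -> None:
    """The application-layer workaround in use today: hold a request open
    until the server has something to say, then immediately re-issue it."""
    cursor = None
    while True:
        poll = f"{url}?cursor={cursor}" if cursor else url
        with urllib.request.urlopen(poll, timeout=90) as resp:  # server parks this
            event = json.load(resp)                             # one event per round trip
        cursor = event.get("cursor")
        handle(event)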
  
Adrien

  
>
>> 
>>Adrien
>> 
>> 
>>
>> 
>>>
>>>…Mark
>>>
>>>> 
>>>> 
>>>> …Mark
>>>> 
>>>> 
>>>> 
>>> 
>> 
> 
 
