Re: Backwards compatibility


On Mar 30, 2012, at 3:41 PM, Adrien W. de Croy wrote:


------ Original Message ------
From: "Mark Watson" watsonm@netflix.com<mailto:watsonm@netflix.com>

Send the requests (yes, pipelined). If they come back without ids, then they are coming back in the order they were sent. If they come back with ids, then that tells you which response is which.

There could be pathological cases where some come back with IDs and some without.

I don't see how that could be the case if every intermediary on the path has indicated that it supports the extension. But I was not presenting a detailed protocol design, just an illustration of the type of backwards compatible design approach I was advocating. Work would certainly be required to design it.
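To make that matching rule concrete, here is a minimal sketch in Python; the "Response-Id" header name is a placeholder for illustration, not a proposed field:

    from collections import deque

    def match_responses(outstanding, responses):
        # outstanding: request ids in the order the requests were sent
        # responses:   (headers, body) pairs as they arrive on the connection
        pending = deque(outstanding)
        for headers, body in responses:
            rid = headers.get("Response-Id")   # hypothetical header name
            if rid is None:
                rid = pending.popleft()        # no id: plain HTTP/1.1 ordering applies
            else:
                pending.remove(rid)            # id says which request this answers
            yield rid, body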



The former incurs a large latency cost. The latter depends very much on how deployable you view pipelining on the overall internet.

It's certainly widely deployed in servers and non-transparent proxies. Non-supporting non-transparent proxies are easily detected. Yes, broken transparent proxies are a (small) problem, but you can also detect these.
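One way such detection could work, purely as a sketch and not a description of what any particular client actually does, is to pipeline a few probe requests to a known origin and check that every one of them is answered:

    import socket

    def pipelining_probe(host, paths, port=80, timeout=5.0):
        # Crude heuristic: count "HTTP/1.1 200" status lines in whatever comes back.
        probe = b"".join(
            b"GET " + p.encode() + b" HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n"
            for p in paths
        )
        data = b""
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(probe)
            try:
                while data.count(b"HTTP/1.1 200") < len(paths):
                    chunk = s.recv(4096)
                    if not chunk:
                        break
                    data += chunk
            except socket.timeout:
                pass
        return data.count(b"HTTP/1.1 200") == len(paths)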

I am skeptical it is sufficiently deployable and we on Chromium are gathering numbers to answer this question (http://crbug.com/110794).

Our internal figures suggest that more than 95% of users can successfully use pipelining. That's an average. On some ISPs the figure is much lower.

Do you keep stats of how many of those 95% are not going through a proxy of any (detectable) kind?  I'd imagine the proportion (of directly-connected users) to be quite high.

No, we don't have that information.

Interleaving data from multiple responses requires some kind of framing, yes. Chunked transfer encoding is a kind of framing that is already supported by HTTP. Allowing chunks to be associated with different responses would be a simple change. Maybe it feels like a hack? That was my question: why isn't a small enhancement to the existing framing sufficient?
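As an illustration of that kind of enhancement (nothing more), each chunk could carry a hypothetical "id" chunk extension, and a receiver could demultiplex on it roughly like this:

    import io

    def demux_chunks(stream):
        # stream: a binary file-like object positioned at the first chunk header,
        # where chunks look like  <size-hex>;id=<n>\r\n<data>\r\n  (the "id"
        # extension is hypothetical, not part of any spec)
        bodies = {}
        while True:
            line = stream.readline().rstrip(b"\r\n")
            size_hex, _, ext = line.partition(b";")
            size = int(size_hex, 16)
            if size == 0:
                break                            # last-chunk terminates the body
            rid = ext.split(b"=", 1)[1] if ext.startswith(b"id=") else b"0"
            data = stream.read(size)
            stream.read(2)                       # consume CRLF after chunk data
            bodies.setdefault(rid, bytearray()).extend(data)
        return bodies

    wire = io.BytesIO(b"5;id=1\r\nHello\r\n5;id=2\r\nWorld\r\n0\r\n\r\n")
    print(demux_chunks(wire))   # {b'1': bytearray(b'Hello'), b'2': bytearray(b'World')}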
I think there would be interop issues.

Can you elaborate?



Putting my question another way, what is the desired new feature that really *requires* that we break backwards compatibility with the extremely successful HTTP/1.1?

Multiplexing,

See my question above

header compression,

Easily negotiated: an indicator in the first request indicates that the client supports it. If that indicator survives to the server, the server can start compressing response headers right away. If the client receives a compressed response it can start compressing future requests on that connection. It's important that this indicator be one which is dropped by intermediaries that don't support compression.
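A rough sketch of that opt-in dance, assuming the indicator is a hypothetical "Compress-Headers" connection option (listing it in Connection means non-supporting HTTP/1.1 intermediaries are required to strip it):

    class HeaderCompressionState:
        # "Compress-Headers" is a hypothetical connection option, used here
        # only to illustrate the opt-in described above.
        def __init__(self):
            self.send_compressed = False         # may we compress what we send?

        def first_request_headers(self):
            # Client: advertise support on the first request of the connection.
            return {"Connection": "Compress-Headers"}

        def on_request(self, headers):
            # Server: the indicator survived every hop, so compressed response
            # headers are safe from here on.
            if "compress-headers" in headers.get("Connection", "").lower():
                self.send_compressed = True

        def on_response_headers(self, compressed):
            # Client: a compressed response proves the whole path supports it,
            # so future requests on this connection can be compressed too.
            if compressed:
                self.send_compressed = True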

prioritization.

I think you mean "re-prioritization". I can send requests in priority order - what I can't do is change that order in response to user actions. How big a deal is this, vs closing the connection and re-issuing outstanding requests in the new order?
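For comparison, that HTTP/1.1 workaround amounts to something like the following sketch; the connection helpers are placeholders, not a real client API:

    def reprioritize(conn, outstanding, new_order, open_connection):
        # conn / open_connection are placeholders for whatever the client uses;
        # outstanding is the list of request URLs not yet answered.
        conn.close()                             # abandon responses still in flight
        conn = open_connection()
        for url in sorted(outstanding, key=new_order.index):
            conn.send_request(url)               # re-issue, highest priority first
        return conn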
I'd like to add

support for new additional semantics - the kind that aren't possible if there's a 1.1 hop in the chain, but are otherwise possible.

An example is some sort of subscribed notification, where you can send a single request, and get any number of responses with entities, as and when the server sees fit to send them.

Think Facebook new message notifications, or online shopping cart transaction status.
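Purely for illustration, the kind of exchange being described, which HTTP/1.1 semantics do not permit today, might look like this; the SUBSCRIBE method and the conn helpers are entirely hypothetical:

    def subscribe(conn, topic):
        # conn.send_request / conn.read_response are placeholders, not a real API
        conn.send_request("SUBSCRIBE", "/notifications", headers={"Topic": topic})
        while True:
            response = conn.read_response()      # server sends one whenever it wants
            if response is None:                 # stream closed: subscription over
                return
            yield response.body                  # e.g. "you have a new message"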

That indeed would be a new protocol, if you can make the case for providing that functionality at the HTTP layer, compared to the application layer where it lives today.


Adrien





…Mark




Received on Friday, 30 March 2012 23:11:42 UTC