Re: PUSH_PROMISE and load balancers

I know there are ways to work around it, but it seems sub-optimal to 
need to maintain two separate connection pools for push-enabled vs 
non-push-enabled clients, especially when this seems easy to fix at 
the protocol level.

I was also thinking that it might be possible to address this as an 
extension: you could send a frame to disable push before sending the 
HEADERS frame. With header compression, though, a custom header would 
probably turn out to be less data per request anyway.
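
For what it's worth, a minimal sketch of what such an extension frame 
might look like on the wire is below. The frame type (0xf0) and the 
empty payload are invented for illustration; only the generic 9-octet 
frame header (24-bit length, 8-bit type, 8-bit flags, reserved bit plus 
31-bit stream id) comes from the framing layer, and peers that don't 
know the type would simply ignore it.

// A rough sketch of the extension idea: send a small extension frame on
// the request stream, just before HEADERS, telling the backend not to
// push for that stream. The frame type 0xf0 is a made-up placeholder.
package main

import (
    "encoding/binary"
    "fmt"
)

// disablePushFrame builds a hypothetical zero-length extension frame
// addressed to streamID.
func disablePushFrame(streamID uint32) []byte {
    const frameType = 0xf0 // invented extension type, ignored by peers that don't know it
    buf := make([]byte, 9)
    // 24-bit payload length = 0, so buf[0:3] stays zero.
    buf[3] = frameType
    buf[4] = 0 // no flags
    binary.BigEndian.PutUint32(buf[5:], streamID&0x7fffffff)
    return buf
}

func main() {
    fmt.Printf("%% x\n", disablePushFrame(3)) // 00 00 00 f0 00 00 00 00 03
}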

Stuart

Martin Thomson wrote:
> The easiest solution is to use two connections: one for push-enabled
> clients, and one for those without push support (including HTTP/1.1
> clients as well as clients that disable push).
>
> The SETTINGS_ENABLE_PUSH flag can be different on each connection.
>
>> On 26 September 2014 01:33, Stuart Douglas <stuart.w.douglas@gmail.com> wrote:
>> So I have been thinking about the case where you have an HTTP/2-aware load
>> balancer that serves both HTTP/1 and HTTP/2 clients, and uses HTTP/2 to connect
>> to the backend servers.
>>
>> Such load balancers will generally maintain a connection pool to the backend
>> servers, and to allow PUSH_PROMISE to be used they will need to enable push on
>> those connections.
>>
>> I am thinking about the case where an HTTP/1 client connects to this proxy:
>> the backend servers will attempt to use PUSH_PROMISE to push content to the
>> load balancer, even though the load balancer knows in advance that it cannot
>> accept the content.
>>
>> It would be possible to hack around this to some extent (e.g. using a custom
>> header to signify that push should be disabled for this request); however, I
>> was thinking that a much nicer solution would be to add a flag to the
>> HEADERS frame indicating that push should be disabled for this request only
>> (i.e. no PUSH_PROMISE frame should be sent with this request's stream as its
>> associated stream).
>>
>> Note that you can't really just send a SETTINGS frame before each request to
>> alter the SETTINGS_ENABLE_PUSH flag, because SETTINGS applies to the whole
>> connection rather than to an individual stream. This will work for the simple
>> case where you are allocating one connection per request, but it won't work if
>> the proxy is multiplexing requests from different clients over the same HTTP/2
>> connection.
>>
>> Thoughts?
>>
>> Stuart
>>
>>
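
For concreteness, a rough sketch of the per-request HEADERS-flag idea from 
the quoted mail is below. The 0x40 flag bit is purely a placeholder and is 
not defined anywhere (HTTP/2 defines END_STREAM 0x1, END_HEADERS 0x4, 
PADDED 0x8 and PRIORITY 0x20 for HEADERS); everything else is just the 
standard 9-octet frame header.

// Sketch of the proposed HEADERS flag: a (hypothetical) bit that tells
// the server "no PUSH_PROMISE with this stream as the associated stream".
package main

import (
    "encoding/binary"
    "fmt"
)

const (
    frameTypeHeaders = 0x1  // HEADERS frame type
    flagEndHeaders   = 0x4  // END_HEADERS
    flagNoPush       = 0x40 // hypothetical "disable push for this request" bit
)

// headersFrameHeader builds the 9-octet frame header for a HEADERS frame
// carrying payloadLen octets of header block on streamID.
func headersFrameHeader(payloadLen int, streamID uint32, flags byte) []byte {
    buf := make([]byte, 9)
    buf[0] = byte(payloadLen >> 16)
    buf[1] = byte(payloadLen >> 8)
    buf[2] = byte(payloadLen)
    buf[3] = frameTypeHeaders
    buf[4] = flags
    binary.BigEndian.PutUint32(buf[5:], streamID&0x7fffffff)
    return buf
}

func main() {
    hdr := headersFrameHeader(64, 5, flagEndHeaders|flagNoPush)
    fmt.Printf("%% x\n", hdr) // 00 00 40 01 44 00 00 00 05
}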

Received on Friday, 26 September 2014 07:21:35 UTC