
Re: Stuck in a train -- reading HTTP/2 draft.

From: Johnny Graettinger <jgraettinger@chromium.org>
Date: Wed, 25 Jun 2014 18:49:08 -0400
Message-ID: <CAEn92ToJQNRS9GL_fET5JKchHDCnaaFjR9SQPKG76g=1zRPuUg@mail.gmail.com>
To: Willy Tarreau <w@1wt.eu>
Cc: Roberto Peon <grmocg@gmail.com>, Poul-Henning Kamp <phk@phk.freebsd.dk>, Martin Thomson <martin.thomson@gmail.com>, Jason Greene <jason.greene@redhat.com>, Mark Nottingham <mnot@mnot.net>, HTTP Working Group <ietf-http-wg@w3.org>
>
> Please consider this simple use case:
>
>    - requests for /img /css /js /static go to server 1
>    - requests for /video go to server 2
>    - requests for other paths go to server 3
>
> Clients send their requests over the same connection.



Just thinking about how I'd manage this: it seems the load balancer
would need special knowledge of whether a backend connection is
likely to send large frames or not.

If it is, and you're not terminating SSL on either side (probably
uncommon), then you could issue read()s of just the frame header and
temporarily splice() the connections for payload-length bytes.
In all other cases, though, you're better off issuing larger fixed-size
read()s and aggregating frames to write() in user space.
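For what it's worth, the header-then-splice() path might look roughly like this. This is a Linux-only sketch, assuming the 9-octet frame header of the recent drafts (24-bit length); forward_frame and the pipe plumbing are names I've made up, and short reads and error handling are glossed over:

```c
#define _GNU_SOURCE /* for splice() */
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>

/* Extract the 24-bit payload length from a 9-octet HTTP/2 frame
 * header (24-bit length, 8-bit type, 8-bit flags, 31-bit stream id). */
static uint32_t frame_payload_len(const unsigned char hdr[9])
{
    return ((uint32_t)hdr[0] << 16) | ((uint32_t)hdr[1] << 8) | hdr[2];
}

/* Forward one frame from 'src' to 'dst': read only the header into
 * user space, then splice() the payload kernel-side through a pipe.
 * Assumes the full header arrives in one read(); a real proxy would
 * loop on short reads and handle EINTR/EAGAIN. */
static int forward_frame(int src, int dst, int pipefd[2])
{
    unsigned char hdr[9];

    if (read(src, hdr, sizeof hdr) != (ssize_t)sizeof hdr)
        return -1;
    if (write(dst, hdr, sizeof hdr) != (ssize_t)sizeof hdr)
        return -1;

    size_t left = frame_payload_len(hdr);
    while (left > 0) {
        /* socket/pipe -> pipe -> socket/pipe, no user-space copy */
        ssize_t in = splice(src, NULL, pipefd[1], NULL, left, SPLICE_F_MOVE);
        if (in <= 0)
            return -1;
        if (splice(pipefd[0], NULL, dst, NULL, in, SPLICE_F_MOVE) != in)
            return -1;
        left -= (size_t)in;
    }
    return 0;
}
```

The point being that only the 9-byte header ever crosses into user space; the payload stays in the kernel.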

What this probably means is that you'll want to segregate your large-asset
servers from your small-asset servers, and tell your LB's pool
configuration which heuristic is appropriate for each pool.

But since you've now segregated your servers anyway, another option is to
have the large-asset servers terminate SSL & HTTP/2 themselves, and
configure the large-asset pool in tcp mode, distinguished by proxy IP or SNI.
This seems a whole lot simpler, lets the load balancer splice(), and
works with SSL.
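Concretely, SNI-based routing in tcp mode could be expressed with something like the following HAProxy-style configuration. This is just a sketch; the hostnames, addresses, and backend names are all made up:

```
frontend tls_in
    mode tcp
    bind :443
    # wait for the ClientHello so we can inspect the SNI extension
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # large-asset pool terminates SSL & HTTP/2 itself; we just splice
    use_backend large_assets if { req_ssl_sni -i assets.example.com }
    default_backend general

backend large_assets
    mode tcp
    server big1 10.0.0.10:443

backend general
    mode tcp
    server web1 10.0.0.20:443
```

Since the LB never decrypts anything here, the whole connection stays eligible for kernel-side forwarding.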
Received on Wednesday, 25 June 2014 22:49:35 UTC
