- From: Per Buer <perbu@varnish-software.com>
- Date: Tue, 10 Apr 2012 11:23:15 +0200
- To: Nicolas Mailhot <nicolas.mailhot@laposte.net>
- Cc: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Nicolas, list,

On Tue, Apr 10, 2012 at 8:56 AM, Nicolas Mailhot <nicolas.mailhot@laposte.net> wrote:

> > So, if the proxy farm fails to hash incoming requests on source IP or
> > target URL then this might happen.
>
> That breaks load balancing as soon as your network is big enough, with
> different parts that get active at different points of the day.

Sorry if I seem to miss the point, but why would it break? Are you worried
that one node in the farm would get too hot?

> > But either of these methods will
> > easily help avoid the problem.
>
> No they won't.
> To scale, network equipment needs to be as stupid as possible, with as much
> smarts as possible kept in the endpoints. You're breaking this principle
> there.

Having some smarts in the network to optimize cache hit rates seems like a
reasonable optimization, as long as the amount of state can be kept as low
as possible.

> And anyway, even if your solution were possible, you still get unhappy users
> who serial-refresh because they're not seeing initial progress in their web
> clients.

If the load balancer were balancing on target URLs, the request would end up
at the same proxy. The proxy should be smart enough to coalesce the request
into the ongoing fetch, and that fetch could feed the user a couple of bytes
so the client understands there is some progress on the download, as some
proxies have done for years.

--
Per Buer
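For concreteness, here is a minimal sketch of the two mechanisms discussed above: hashing the target URL to pick a proxy so identical requests land on the same node, and coalescing concurrent requests for the same URL into one backend fetch. This is illustrative Go under stated assumptions, not Varnish's or any real load balancer's implementation; the names pickBackend, coalescer, and fetchFromOrigin are made up for the example.

```go
// Sketch only (assumed design, not production code): URL-hash backend
// selection plus per-URL request coalescing.
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// pickBackend hashes the request URL so every request for the same object
// is routed to the same proxy in the farm.
func pickBackend(url string, backends []string) string {
	h := fnv.New32a()
	h.Write([]byte(url))
	return backends[h.Sum32()%uint32(len(backends))]
}

// coalescer keeps at most one backend fetch in flight per URL; later
// requests for the same URL wait on the ongoing fetch instead of issuing
// a duplicate one.
type coalescer struct {
	mu       sync.Mutex
	inflight map[string]*fetch
}

type fetch struct {
	done chan struct{}
	body []byte
}

func (c *coalescer) get(url string, fetchFromOrigin func(string) []byte) []byte {
	c.mu.Lock()
	if f, ok := c.inflight[url]; ok {
		// Coalesce: join the fetch that is already in progress.
		c.mu.Unlock()
		<-f.done
		return f.body
	}
	f := &fetch{done: make(chan struct{})}
	c.inflight[url] = f
	c.mu.Unlock()

	// Only the first requester talks to the origin.
	f.body = fetchFromOrigin(url)
	close(f.done)

	c.mu.Lock()
	delete(c.inflight, url)
	c.mu.Unlock()
	return f.body
}

func main() {
	backends := []string{"proxy-a", "proxy-b", "proxy-c"}
	fmt.Println(pickBackend("http://example.com/big.iso", backends))

	c := &coalescer{inflight: make(map[string]*fetch)}
	body := c.get("http://example.com/big.iso", func(u string) []byte {
		return []byte("payload for " + u)
	})
	fmt.Println(string(body))
}
```

A real proxy would additionally stream the first bytes of the shared fetch to each waiting client as they arrive, which is the "show some progress" behaviour described above; that part is omitted here for brevity.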
Received on Tuesday, 10 April 2012 09:24:10 UTC