- From: Chris Lemmons <alficles@gmail.com>
- Date: Wed, 31 Jul 2019 15:36:00 -0600
- To: Bin Ni <nibin@quantil.com>
- Cc: Amos Jeffries <squid3@treenet.co.nz>, HTTP Working Group <ietf-http-wg@w3.org>
So I have to wonder about the end usefulness from an implementation
perspective. Part of why Alt-Svc works is that it's optional, so servers can
use it as an optimization while everything else still works. If you have a new
protocol that means basically "alt-svc, but mandatory", it means the CDN,
load balancer, or similar service simply wouldn't work for any client that
didn't understand the new value. There are a _lot_ of HTTP clients out there.
This would be a fairly high barrier to adoption, which would create a
chicken-and-egg problem that would be tough to solve.

On Tue, Jul 30, 2019 at 5:53 PM Bin Ni <nibin@quantil.com> wrote:
>
> Hi Amos and All,
>
> Regarding the 30X redirect across different cache servers, it is already
> used by many big CDN companies that I know of.
> It is proven to make the system faster without much burden on the
> front-end layer you are concerned about.
> But 30X has the limitations I mentioned. This is why I'm proposing this
> new type of redirection to address those limitations.
> https://docs.google.com/document/d/1gtF6Nq3iPe44515BfsU18dAxfCYOvQaekiezK8FEHu0/edit?usp=sharing
> So it is not a question of whether this proposal will be useful or not.
> I know it will at least be very useful to those CDNs.
>
> Thanks for your comments.
> Please let me know if you have any questions.
>
> Bin
>
> On Tue, Jul 30, 2019 at 12:38 AM Amos Jeffries <squid3@treenet.co.nz> wrote:
>>
>> On 30/07/19 7:02 am, Bin Ni wrote:
>> > Yes, what we want is a way to force a "deterministic behavior from the
>> > client", just like all the 30X redirections today.
>> >
>> > Let me give a few more cases in which this can be helpful:
>> > 1. A client in North America is returned a server IP in Europe by the
>> > DNS. The server then wants to direct the client to another server in
>> > North America for better performance.
>> > 2. The content of a website is hashed to multiple servers based on URL.
>> > These servers may not even be in the same datacenter. The DNS does not
>> > have this information and may return any IP for any query of the
>> > website's hostname. Each server calculates the hash for each request
>> > and redirects the client to the correct server that has the content.
>> > This is quite common for CDNs.
>>
>> It is common for good reason: efficiency.
>>
>> There is a secondary level of efficiency that comes from the redirects
>> being actual HTTP 30x redirects. Having large objects at an entirely
>> different URL allows a different CDN or caching layer closer to the
>> client to provide the large object contents. DNS can be (and often is)
>> involved in that layer to provide the closest server IP.
>>
>> As proposed so far, your mechanism would flatten this two-tier structure,
>> forcing the frontend layer (now the only layer) to be involved in
>> deciding the specific hardware location of individual objects /
>> resources. Making the frontend machinery store more information and do
>> more work per request is not going to make the system faster; quite the
>> opposite.
>>
>> By separating the work into three layers (frontend LB, cache, and
>> origin), each CDN layer gets an increase of some orders of magnitude in
>> performance / capacity:
>> - the origin able to handle/generate a few thousand responses per second,
>> - the cache able to re-distribute those as static objects at line speed,
>> an order or two of magnitude more than the origins,
>> - the frontend LB able to handle millions of the small ~1KB
>> request/response pairs for redirection, spreading that high load across
>> the lower layers.
>>
>>
>> AYJ
>>
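The distinction Chris draws above between an advisory Alt-Svc and a mandatory
redirect can be made concrete with two response shapes (the host names below
are placeholders, not taken from the proposal). An Alt-Svc header (RFC 7838)
rides along on an ordinary response, so a client that ignores it still
receives the resource:

    HTTP/1.1 200 OK
    Alt-Svc: h2="na-cache.example.net:443"; ma=86400
    Content-Type: text/html

    ...the requested representation, served as usual...

A 30x redirect, by contrast, gives a non-supporting client nothing usable;
the only path to the content is following the Location header:

    HTTP/1.1 302 Found
    Location: https://na-cache.example.net/videos/big.mp4
    Content-Length: 0

A "mandatory alt-svc" mechanism would share the second property: clients that
do not implement it are simply cut off.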
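The per-request hash-and-redirect pattern from Bin Ni's second example can be
sketched briefly in Go. This is only an illustration under assumed names: the
node list, the "self" constant, and the FNV hash are placeholders, not part of
the proposal or of any particular CDN's implementation.

    // Each cache node knows the full node list, hashes the request URL,
    // and 302-redirects the client to the node that owns the object.
    package main

    import (
        "fmt"
        "hash/fnv"
        "net/http"
    )

    // Hypothetical cache servers all published under one hostname in DNS.
    var nodes = []string{
        "cache-0.cdn.example.net",
        "cache-1.cdn.example.net",
        "cache-2.cdn.example.net",
    }

    // Name of the node handling this request.
    const self = "cache-0.cdn.example.net"

    // ownerOf maps a URL path to the node responsible for caching it.
    func ownerOf(path string) string {
        h := fnv.New32a()
        h.Write([]byte(path))
        return nodes[h.Sum32()%uint32(len(nodes))]
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        if owner := ownerOf(r.URL.Path); owner != self {
            // The object lives elsewhere: send the client there directly.
            http.Redirect(w, r, "https://"+owner+r.URL.RequestURI(), http.StatusFound)
            return
        }
        // Otherwise serve from the local cache (stubbed out here).
        fmt.Fprintf(w, "served %s from %s\n", r.URL.Path, self)
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }

The 302 here is the two-tier behaviour Amos describes: the node doing the
hashing does only a tiny amount of work per request, and the bulk bytes are
served by whichever node the hash selects.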
Received on Wednesday, 31 July 2019 21:36:35 UTC