- From: Jack Firth <jackhfirth@gmail.com>
- Date: Wed, 31 Jul 2019 22:52:07 -0700
- To: Bin Ni <nibin@quantil.com>
- Cc: Chris Lemmons <alficles@gmail.com>, Amos Jeffries <squid3@treenet.co.nz>, HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAAXAoJUdJP-WUa8sxt_3L+=09wQb_UUOGq0517ibzYrVoU8aOA@mail.gmail.com>
> > Versus with "alt-svc", the server will serve the content for the current > request, client will finish receiving the response and MAYBE connect to the > new IP for the next request. Could you return a 4xx error with the Alt-Svc header set, and a body message that tells clients they must use the Alt-Svc if they don't want to get a 4xx? Or even a generic 300? It seems reasonable to me for a CDN server to refuse to serve requests it knows will be prohibitively expensive, while providing clients with Alt-Svc as a way to find a less-expensive alternative. On Wed, Jul 31, 2019 at 8:55 PM Bin Ni <nibin@quantil.com> wrote: > Hi Chris, > > There are a few caveats in your reasoning: > 1. It does not have to be some "accept header". It can be the > "User-Agent" header as I mentioned, for example, chrome version > 100. Or > on contract. For example, some CDN customers have full control of the > client software. They just tell the CDN provider that "you can enable the > 312 redirection on all of our domains". > 2. Even when it does rely on some "accept header", there is still a > critical difference from "alt-svc": > In this proposal, the current request will not be served. The client > will get a 312 and forced to reconnect to the new IP, similar to the 30X > redirection. > Versus with "alt-svc", the server will serve the content for the > current request, client will finish receiving the response and MAYBE > connect to the new IP for the next request. > > Hope this is more clear. Please don't hesitate with more questions! > Thanks! > > Bin > > On Wed, Jul 31, 2019 at 6:49 PM Chris Lemmons <alficles@gmail.com> wrote: > >> So, the typical mechanism for that would be an accept header of some >> sort. But if clients are opting into the redirect, then the redirect is >> effectively optional. Any client that would set the accept header can >> instead just support alt-svc today and choose to redirect. >> >> On Wed, Jul 31, 2019 at 7:39 PM Bin Ni <nibin@quantil.com> wrote: >> >>> Hi Chris, >>> >>> Great question! >>> The solution is that the server will only return the new status code 312 >>> if it is sure the client can support it. >>> The information can be from the User-Agent header, or some other request >>> header. >>> Or communicated through some other channel, for example, on a paper >>> contract. >>> >>> Thanks! >>> >>> Bin >>> >>> >>> On Wed, Jul 31, 2019 at 2:36 PM Chris Lemmons <alficles@gmail.com> >>> wrote: >>> >>>> So I have to wonder about the end usefulness from an implementation >>>> perspective. Part of why alt-svc works is that it's optional, so >>>> servers can use them as optimization but everything else still works. >>>> >>>> If you have a new protocol that means basically "alt-svc, but >>>> mandatory", it means the CDN, load balancer, or similar service simply >>>> wouldn't work for any client that didn't understand the new value. >>>> There are a _lot_ of http clients out there. This would be a fairly >>>> high barrier to adoption, which would create a chicken-and-egg problem >>>> that would be tough to solve. >>>> >>>> On Tue, Jul 30, 2019 at 5:53 PM Bin Ni <nibin@quantil.com> wrote: >>>> > >>>> > Hi Amos and All, >>>> > >>>> > Regarding the 30X redirect across different cache servers, it is >>>> already used by many big CDN companies that I know of. >>>> > It is proven to make the system faster without much burden on the >>>> front-end layer which you are concerned. >>>> > But 30X has the limitations I mentioned. 
This is why I'm proposing >>>> this new type of redirection to address the limitations. >>>> > >>>> https://docs.google.com/document/d/1gtF6Nq3iPe44515BfsU18dAxfCYOvQaekiezK8FEHu0/edit?usp=sharing >>>> > So it is not a question that this proposal will be useful or not. >>>> > I know it will at least be very useful to those CDNs. >>>> > >>>> > Thanks for your comments. >>>> > Please let me know if you have any questions. >>>> > >>>> > Bin >>>> > >>>> > On Tue, Jul 30, 2019 at 12:38 AM Amos Jeffries <squid3@treenet.co.nz> >>>> wrote: >>>> >> >>>> >> On 30/07/19 7:02 am, Bin Ni wrote: >>>> >> > Yes, what we want is a way to force a "deterministic behavior from >>>> the >>>> >> > client", just like all the 30X redirections today. >>>> >> > >>>> >> > Let me give a few more cases in which this can be helpful: >>>> >> > 1. A client in North America is returned a server IP in Europe by >>>> the >>>> >> > DNS. The server then wants to direct the client to another server >>>> in >>>> >> > North America for better performance. >>>> >> > 2. The content of a website is hashed to multiple servers based on >>>> URL. >>>> >> > These multiple servers may not even be in the same datacenter. The >>>> DNS >>>> >> > does not have this information and may return any IP to any query >>>> of the >>>> >> > website's hostname. Each server will calculate the hash for each >>>> >> > request and redirect client to the correct server that has the >>>> content. >>>> >> > This is quite common for CDN. >>>> >> >>>> >> It is common for good reason: efficiency. >>>> >> >>>> >> There is a secondary level of efficiency that comes from the >>>> redirects >>>> >> being actual HTTP 30x redirects. Having large objects at different >>>> URL >>>> >> entirely provides for a different CDN or caching layer closer to the >>>> >> client to provide the large object contents. DNS can be (often is) >>>> >> involved in that layer to provide the closest server IP. >>>> >> >>>> >> As proposed so far your mechanism would flatten this two-tier >>>> structure. >>>> >> Forcing the frontend layer (now only layer) to be involved in >>>> deciding >>>> >> the specific hardware location of individual objects / resources. >>>> >> Making the frontend machinery store more information and do more >>>> work >>>> >> per-request is not going to make the system faster, quite the >>>> opposite. >>>> >> >>>> >> >>>> >> By separating the work into the three layers: frontend LB, cache, and >>>> >> origin. Each CDN layer gets some orders of magnitude increase in >>>> >> performance / capacity: >>>> >> - origin able to handle/generate some few thousand responses per >>>> second, >>>> >> - cache able to re-distribute those as static objects at line speed >>>> for >>>> >> an order or two magnitude more than origins, >>>> >> - frontend LB able to handle millions of the small ~1KB >>>> >> request/response pairs for redirection spreading that high load >>>> across >>>> >> the lower layers. >>>> >> >>>> >> >>>> >> AYJ >>>> >> >>>> > >>>> >>> >>>
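
For concreteness, a minimal sketch of Jack's "refuse and advertise Alt-Svc" idea, applied to the URL-hashing case Bin describes, might look like the following. The hostnames, the FNV hash over the path, and the use of 421 (Misdirected Request) as the refusal code are illustrative assumptions only; they are not part of the 312 proposal or any draft.

```go
// Hypothetical sketch: a cache node that refuses requests for objects it does
// not own and points the client at the owning node via Alt-Svc.
package main

import (
	"fmt"
	"hash/fnv"
	"log"
	"net/http"
)

// Hypothetical shard map: the cache nodes the content is spread across.
var backends = []string{"cache-a.example.net", "cache-b.example.net", "cache-c.example.net"}

// The name this particular node answers as.
const self = "cache-a.example.net"

// owner hashes the request path to decide which node holds the object.
func owner(path string) string {
	h := fnv.New32a()
	h.Write([]byte(path))
	return backends[h.Sum32()%uint32(len(backends))]
}

func handler(w http.ResponseWriter, r *http.Request) {
	if o := owner(r.URL.Path); o != self {
		// Refuse to serve, but tell the client where the object lives.
		// Alt-Svc-aware clients can retry there; others at least get an
		// explanation in the body, as Jack suggests.
		w.Header().Set("Alt-Svc", fmt.Sprintf(`h2="%s:443"; ma=86400`, o))
		http.Error(w, "this resource is served from "+o, http.StatusMisdirectedRequest) // 421
		return
	}
	// Owned locally: serve from cache as usual (placeholder).
	fmt.Fprintln(w, "serving", r.URL.Path, "from", self)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The difference from the proposed 312, as discussed in the thread, is that here the refusal relies on clients choosing to honor Alt-Svc, whereas 312 would make the redirection to the new IP mandatory for the current request.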
Received on Thursday, 1 August 2019 07:16:34 UTC