Re: Semantics of multiple 103s in Early Hints

Hi Kazuho,

On Wed, Aug 09, 2017 at 11:03:50AM +0900, Kazuho Oku wrote:
> I think that there can be use cases for multiple 103s that do not
> involve an intermediary.
>
> For example, an origin server might emit the preload link to the
> global CSS and then, after checking whether the user has logged onto
> the website, emit an additional set of preload links. The web
> application will then process the request and emit the final response.

Oh absolutely! I mean, the intermediaries make it (in my opinion) easier
to understand why multiple, possibly overlapping responses may appear.
Some people tend to think that servers are smart and ought to do
everything consistently right, and they will have a harder time
understanding why a single server responds in multiple steps.
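
To make your scenario concrete, here's roughly how I picture the
exchange on the wire (just a sketch, the resource names and the login
check are made up, nothing here is taken from the draft):

  def handle(writer, user_is_logged_in):
      # First hint: the resource we know about before doing any work.
      writer.write(b"HTTP/1.1 103 Early Hints\r\n"
                   b"Link: </global.css>; rel=preload; as=style\r\n\r\n")
      hinted = ["</global.css>; rel=preload; as=style"]

      # Second hint: additional resources found after a cheap check
      # (here, whether the user is logged in).
      if user_is_logged_in:
          writer.write(b"HTTP/1.1 103 Early Hints\r\n"
                       b"Link: </account.css>; rel=preload; as=style\r\n\r\n")
          hinted.append("</account.css>; rel=preload; as=style")

      # Final response, repeating whatever the application still wants
      # the client to act on.
      head = "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n"
      head += "".join("Link: %s\r\n" % l for l in hinted)
      writer.write(head.encode("ascii") + b"\r\n")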

> We agree that the speculations made by intermediaries can differ from
> that expressed by the origin.
> 
> What I am trying to point out is that the fact does not lead to the
> conclusion that the union needs to be calculated _by the client_, and
> therefore that the text should not imply that the client is required
> to calculate the union due to the way intermediaries behave.

Well, not necessarily by the client: by anyone seeing the 103 responses,
so possibly by any intermediary on the way to the client.
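
And in practice the "union" is trivial for whoever sees the 103s,
client and intermediary alike: collect the Link field values of every
informational response and drop the duplicates. Something like this
(just an illustration, nothing of this is mandated by the draft):

  def merge_early_hint_links(informational_responses):
      """Each element is the list of (name, value) headers of one 103."""
      seen, union = set(), []
      for headers in informational_responses:
          for name, value in headers:
              if name.lower() == "link" and value not in seen:
                  seen.add(value)
                  union.append(value)
      return union

  # The overlap between two 103s collapses to two distinct links:
  print(merge_early_hint_links([
      [("Link", "</a.css>; rel=preload")],
      [("Link", "</a.css>; rel=preload"), ("Link", "</b.css>; rel=preload")],
  ]))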

> (following text explains an alternative model that does not require
> the client to calculate the union)
> 
> Consider the case where a gateway first speculates that the response
> will contain "Link: </a.css>; rel=preload" and then a server
> speculates that the response will contain "Link: </b.css>; rel=preload".
> 
> In the current model, a.css will be included in the first 103 response
> and only b.css will be included in the second 103 response.
> 
> However, it is technically possible for the gateway to remember what
> it has sent in the 103 response, and calculate the union of the
> headers when it receives a 103 from upstream. Then, the intermediary
> will emit the union of the headers as the second 103 response.

I 100% agree on this one; it's even aligned with the example I gave of
a cache that would learn them and deliver them early based on previous
requests.
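
That stateful-gateway behaviour is also cheap to implement. A rough
sketch of what I have in mind (the class and method names are of course
invented for illustration):

  class HintingGateway:
      def __init__(self):
          self.sent_links = []          # links already hinted downstream

      def speculate(self, links, send):
          # Emit the gateway's own guess as the first 103.
          self.sent_links = list(links)
          send(103, [("Link", l) for l in self.sent_links])

      def on_upstream_103(self, upstream_links, send):
          # Merge the upstream hints with what we already sent and
          # re-emit the whole set, so the last 103 carries the union.
          for l in upstream_links:
              if l not in self.sent_links:
                  self.sent_links.append(l)
          send(103, [("Link", l) for l in self.sent_links])

  # With a stand-in send() that just prints what would go downstream:
  gw = HintingGateway()
  send = lambda status, headers: print(status, headers)
  gw.speculate(["</a.css>; rel=preload"], send)
  gw.on_upstream_103(["</b.css>; rel=preload"], send)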

> > We
> > could possibly even suggest that elements that were learned from 103
> > and not yet prefetched could be aborted if they don't appear in the
> > final response.
> 
> I am opposed to the idea.
> 
> One reason is that adding such a rule will be a non-editorial change
> (as discussed above).
> 
> The other reason is that using a binary signal is not ideal. If you
> really want to make weak suggestions in the first 103 response and
> then update it in the following 103s, you should consider adding a
> precedence parameter to the preload elements.

Yep, I thought about this yesterday evening as well. While it could be
nice, it would significantly complicate this stuff and would quickly
fall outside the initial scope. We'd rather propose later to add new
properties to Link if we find a compelling use case, and 103 will
naturally accommodate it.

> Then the client can
> schedule the fetch of the preload links based on the precedence, and
> also update the precedence (by calculating the sum of the precedences
> when overlapping preload links are found in multiple 103s).

Let's not start speculating about how this would be calculated; you're
putting your fingers in a dangerous area ;-)

OK, so let's get back to the text. My previous concern was a lack of
fluidity in your proposed updated sentence. Now that I have a better
idea of your intent, I'll see if I can propose to slightly adapt it to
keep the same spirit without introducing intermediaries into the mix.

Cheers,
Willy
