RE: Priority straw man

Assuming I'm following correctly, Jeff's comments seem to reflect the tree model in Will's draft, which has already been removed and isn't part of either proposal currently under discussion.  He would send two unconnected chains of requests within a group and expect them to be treated equally within that group's timeslice, regardless of the relative priorities of resources within each chain.  That's not possible with either proposal currently on the table, since, as Martin notes, the end results of both are equivalent.  The difference is strictly whether the collapsing of dependencies into a strict order happens on the client or server side of the connection.
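
By "collapsing" I mean something like this -- a toy illustration only, not the wire syntax of either draft:

    # Toy illustration -- not either draft's syntax.
    # Two unconnected chains within one group:
    chain_a = ["S1", "S3"]   # S1 <-- S3
    chain_b = ["S5", "S7"]   # S5 <-- S7

    # Collapsing into a strict order means picking one total ordering,
    # e.g. level by level:
    collapsed = [s for level in zip(chain_a, chain_b) for s in level]
    # -> ["S1", "S5", "S3", "S7"], i.e. (S1, S5) <-- (S3, S7)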

I'm curious about his comment that he would have to merge priority groups on the back-end, though -- unlike stream IDs, they aren't exhausted once used.  As soon as a group is empty, you can reclaim it for more streams and a different user.  Part of the bikeshed on the number of bits will certainly be how many simultaneously active groups you need available on the back-end, but that's orthogonal to how priorities within a group are expressed.
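
To be explicit about what I mean by reclaiming, a toy sketch (the pool size and helper names are invented):

    # Toy sketch: unlike stream IDs, group slots can be reused once empty.
    free_groups = set(range(8))       # pretend the back-end allows 8 groups

    def open_group():
        return free_groups.pop()      # only fails if all 8 are in use

    def close_group(gid):
        free_groups.add(gid)          # an emptied group goes back in the pool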

I've been envisioning the scenario working like this:
 - Server is allocating X% of back-end connection per user; user sends two groups with weights 80% and 20% on the front-end.
 - On the back-end, the proxy creates two corresponding groups, weighting them at 80%*X% and 20%*X% (rough sketch below).
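
In code, roughly (purely illustrative; the helper name is made up):

    # How a proxy might scale per-user group weights onto its back-end
    # connection.  user_share is the X above, e.g. 0.25.
    def backend_weights(user_share, frontend_weights):
        return {g: w * user_share for g, w in frontend_weights.items()}

    backend_weights(0.25, {"G1": 0.80, "G2": 0.20})
    # -> {"G1": 0.2, "G2": 0.05}, i.e. 80%*X and 20%*X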

This does require the proxy to reweight when one of the groups goes away -- and maybe even when the server doesn't have anything to send on one group -- if it wants to maintain strict resource allocation across front-end connections.  To deal with that, you would need groups-of-groups, or the ability to have disjoint lists of resources within a group, which is essentially the same thing.  But then, when you introduce multiple proxies on a path, the same problem shows up with another layer of indirection.
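
The reweighting I have in mind when a group empties would be roughly this (again just a sketch, not a proposal):

    # Renormalize the surviving groups so the user keeps the same overall
    # share of the back-end connection.
    def reweight(user_share, weights, gone):
        remaining = {g: w for g, w in weights.items() if g != gone}
        total = sum(remaining.values())
        return {g: user_share * w / total for g, w in remaining.items()}

    reweight(0.25, {"G1": 0.80, "G2": 0.20}, gone="G2")
    # -> {"G1": 0.25}: G1 now gets the whole per-user share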

To me, the key point is that this is advisory -- we need to give the server enough information to make smart decisions *when* there are decisions to be made, but not introduce so much extra state that we slow things down in the aggregate.  It doesn't have to be perfect -- which is good, because it never will be.

-----Original Message-----
From: Martin Thomson [mailto:martin.thomson@gmail.com] 
Sent: Monday, February 10, 2014 10:02 AM
To: Jeff Pinner
Cc: Michael Sweet; Roberto Peon; William Chan (陈智昌); Peter Lepeska; Tatsuhiro Tsujikawa; Osama Mazahir; HTTP Working Group
Subject: Re: Priority straw man

On 8 February 2014 10:22, Jeff Pinner <jpinner@twitter.com> wrote:
> In contrast to dependencies, where the incoming request looks
> something like:
>
> G1 (80% w/ S1 <-- S3) and G2 (20% w/ S5 <-- S7)
>
> I can proxy these approximately to the backend server as something like:
>
> Gn (1/"n"% w S1 <-- S3 & S5 <-- S7)

Unless I misunderstand, that's not entirely correct.  The prioritization schemes that both Will and I described wouldn't permit that.  You would have to choose (S1, S3) <-- (S5, S7) or (S1, S5) <-- (S3, S7) or (S1) <-- (S3, S5) <-- (S7), or something like that.

In all cases, the end states produced by the dependency schemes Will and I wrote up (I don't know what Roberto is talking about here) are exactly equivalent to the scheme Osama described unless there are more layers of dependencies than there are allowed priorities.

The real difference between the schemes is how they handle transitions into the desired end state, particularly when an intermediary needs to translate multiple ways between connections.

It seems to me that a dependency scheme has some advantages when it comes to transitions (the video reprioritization thing, maybe some proxy cases).  It may be that a dependency scheme also has advantages when it comes to translating, but I'm not sure that it's a clear win without full tree-based dependencies.  The cost of these purported advantages is largely borne by the server, which has to deal with the additional complexity of managing dead streams and garbage collection.

I'll also point out that we have to be careful not to raise the bar too high for the server implementers, otherwise we will find that they start ignoring priority more than we would like.  That creates incentives for browsers to hold back requests, apply heuristics, etc...
