Re: Comments on draft-chan-http2-stream-dependencies-00

I don't mean to change the topic, since Amos and Roberto are having a
good exchange, but I just wanted to endorse the proposal. The benefits
in terms of page load performance will be big, especially for a
forward proxy that is able to prioritize across resources from
different hosts that belong to the same page.

I like the core idea of a list of dependent resources representing
each tab, with the resources in the list either ordered or unordered.
This does expand out into a tree (with unordered dependent objects as
siblings), but the way the draft expresses the tree as a list is
elegant and simplifying, in my opinion.
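
To make that concrete, here is a tiny sketch (my own illustrative
names, not the draft's wire format) of how a flat per-tab list, where
each entry names the stream it depends on, already encodes the tree:

    # Hypothetical flat list: (stream_id, depends_on); 0 means no antecedent.
    per_tab_list = [
        (3, 0),   # A
        (5, 3),   # B depends on A
        (7, 3),   # C depends on A -> B and C become siblings under A
    ]

    def children(parent):
        return [sid for sid, dep in per_tab_list if dep == parent]

    # children(3) == [5, 7], i.e. the list expands into the tree A <- (B -- C).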

That said, any further simplification of this proposal that does not
significantly reduce the amount of information the browser provides to
upstream servers would be great.

Thanks,

Peter

On Wed, Jan 15, 2014 at 8:55 PM, Amos Jeffries <squid3@treenet.co.nz> wrote:
> On 2014-01-16 12:44, Roberto Peon wrote:
>>
>> On Wed, Jan 15, 2014 at 2:48 PM, Amos Jeffries <squid3@treenet.co.nz>
>> wrote:
>>
>>> On 2014-01-16 06:43, Martin Thomson wrote:
>>>
>>>> On 14 January 2014 17:22, William Chan (ι™ˆζ™Ίζ˜Œ) wrote:
>>>>
>>> <snip>
>>>
>>>>>> I don't think that you need the ORDERED bit.  It seems to me like you
>>>>>> could do "A <- B -- C" just as easily by having C reference A.  The
>>>>>> single chain of antecedent-dependent nodes is something an
>>>>>> implementation could easily do, even if this were the case.  I don't
>>>>>> see the advantage to having ORDERED in the protocol.
>>>>>>
>>>>>
>>>>> Having C reference A brings us back to dependency trees instead of
>>>>> lists.
>>>>>
>>>>
>>>> Not really.  If you send A <- B followed by A <- C, then that
>>>> collapses to A <- B -- C.  You can do that based on the order that the
>>>> priorities are set (or by stream identifier numbers, but that would be
>>>> harder to work out).
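>>>>
>>>> A tiny illustration (made-up names) of how successive references
>>>> collapse into sibling order:
>>>>
>>>>     # Apply "stream depends on parent" statements in the order the
>>>>     # priorities are set; sibling order falls out of arrival order.
>>>>     children = {}
>>>>
>>>>     def set_dependency(stream, parent):
>>>>         children.setdefault(parent, []).append(stream)
>>>>
>>>>     set_dependency('B', 'A')
>>>>     set_dependency('C', 'A')
>>>>     # children['A'] == ['B', 'C'], i.e. A <- B -- C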
>>>>
>>>>  <snip>
>>>
>>>
>>>
>>>>> Once you open it up to a possible tree, it gets more complicated. In
>>>>> terms of the complexity tradeoff, I think adding a bit for order is
>>>>> simpler than having to have peers/siblings.
>>>>>
>>>>
>>>> It's not really a tree in the sense that you think.  It's a list of
>>>> lists.  The exact same structure you describe in the draft in fact.
>>>> See:
>>>>
>>>> A
>>>> ^
>>>> B -- C
>>>> ^
>>>> D -- E
>>>>
>>>> The only difference is that in the draft you make statements about
>>>> what implementations do with their internal data structures, which
>>>> isn't really necessary.
>>>>
>>>>
>>> I have been wondering exactly how such a list tree would best be
>>> implemented to prevent objects at the leaf ends being starved, and to avoid
>>> stateful interactions between the implementations. In particular, how to
>>> avoid/reduce problems with stateless or stream-ID-mapping intermediaries.
>>>
>>
>> Starvation is sometimes the intent. Without it pages load more slowly and
>> the user experience suffers.
>>
>>
>>>
>>> In short: I am completely against adding weightings and complex
>>> relationship indicators. Even though the feature is OPTIONAL, jumping
>>> through such hoops is completely unrealistic for middleware. Here is a
>>> simple off-the-cuff implementation example for the simplest A<-B, A<-C,
>>> C<-D type of dependency tree/list.
>>>
>>>
>> I think it is a bit premature to say it is unrealistic? This was designed
>> to solve real-world problems in a real proxy :)
>>
>>
>>> If each stream is assigned a single depends-on parent (which can be
>>> changed dynamically with PRIORITY), the whole thing could be implemented as
>>> a single list in stream-ID order with just one property indicating the
>>> depended-on stream. Scanning over the list when scheduling frame writes
>>> could recursively try to write available frames for the depended-on stream
>>> before writing the current stream's frames, then move on to the next stream
>>> in the list. Garbage collection would simply be removing depends-on
>>> information for already-completed streams, effectively raising the child
>>> streams all up a level of priority.
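>>>
>>> A rough Python-ish sketch of what I mean (all names made up, purely to
>>> illustrate, not Squid code):
>>>
>>>     class Stream:
>>>         def __init__(self, sid, depends_on=None):
>>>             self.sid = sid                # stream ID
>>>             self.depends_on = depends_on  # single parent; updated by PRIORITY
>>>             self.done = False
>>>
>>>     def write_cycle(conn, streams):
>>>         # streams kept in stream-ID order: default priority is lowest ID first
>>>         for s in streams:
>>>             write_with_parent(conn, s)
>>>
>>>     def write_with_parent(conn, s):
>>>         # recursively give the depended-on stream its chance to write first
>>>         if s.depends_on is not None and not s.depends_on.done:
>>>             write_with_parent(conn, s.depends_on)
>>>         conn.write_one_frame(s)           # then one frame for this stream
>>>
>>>     def garbage_collect(streams):
>>>         for s in streams:
>>>             if s.depends_on is not None and s.depends_on.done:
>>>                 s.depends_on = None       # children rise one priority level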
>>>
>>
>> Note that you're proposing a tree, not a list, unless you additionally
>> impose a requirement that any parent must have only one child (which
>> you're
>> not doing in your example).
>> The list-of-lists or the tree-of-deps ends up being essentially the same
>> thing imho. I'd implement the list-of-lists as a tree anyway since it
>> would
>> probably be the most space-and-time efficient implementation.
>>
>>
>>> Simple, fast and quite memory efficient. There are also a few very useful
>>> emergent properties from this:
>>>
>>> 1) Default priority (no dependencies or implementations ignoring this
>>> mechanism) would be to deliver frames for the lowest stream-ID first.
>>>  a) HTTP/1.1 ordering maps cleanly to the 2.0 default priority for
>>> delivery.
>>>  b) hops without the feature implemented mostly act as if default
>>> priority
>>> were being used.
>>>
>>
>> We know that the 'priority equals request sequence #' thing is suboptimal
>> in terms of latency and user experience, so that isn't an improvement :/
>>
>
> Sure. I don't think I'd support a feature where the case in which it was NOT
> implemented performed better than its default. The point here is that its
> default is a non-change from regular behaviour: no better, but no worse either.
>
>
>>
>>> 2) all streams get at least one shot at delivering a frame on the
>>> connection each cycle.
>>>  a) preventing any stream hanging completely.
>>>  b) allowing implementations to limit how much they send (i.e. DATA) on one
>>> write, secure in the guarantee that a high-priority stream will get more
>>> (and earlier) chances to write than others, relative to how many streams
>>> depend on it.
>>>
>>>
>> We played with this waaaay back in the early spdy days. It was
>> disadvantageous then, but it is probably worth trying it again.
>> The assumption at the time, given that layout time was a significant
>> percentage of display cost (latency), was that perhaps giving the layout
>> engine more information about the sizes of images where the markup didn't
>> make the size explicit would speed things up. It didn't help.
>>
>> In the proxy case, you'll want to have relative priorities active within
>> the context of a single user's session, and then you'll want a different
>> set of priorities between different clients.
>> That way no *client* gets starved, though a client may be starving itself
>> of a resource by constantly requesting higher priority items without
>> changing the priority of the low-priority item.
>> Clients that don't do prioritization would lack information about relative
>> priorities within their session, and thus the server would probably just
>> serve them in order-received.
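>>
>> As a loose illustration only (names invented, not our actual proxy code):
>> round-robin between client sessions so no client starves, and apply each
>> client's own dependency ordering only within its turn:
>>
>>     from collections import deque
>>
>>     def serve(sessions, conn):
>>         ready = deque(sessions)
>>         while ready:
>>             session = ready.popleft()
>>             stream = session.next_ready_stream()  # per-session dependency order
>>             if stream is None:
>>                 continue                          # nothing ready; skip this pass
>>             conn.write_one_frame(stream)
>>             ready.append(session)                 # equal turns across clients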
>
>
> In the proxy case the server is not necessarily aware of which requests are
> for which client. All requests are coming from the proxy, and sending such
> info adds complexity.
> A mechanism where all requests without pending dependencies have equal
> top-level priority, and are treated equally, prevents a proxy from starving
> some client(s) simply by being there. It also makes it unnecessary for the
> proxy to artificially create dependencies to avoid that starvation.
>
>
>
>>
>>> 3) groups of requests that depend on each other (i.e. from the same client)
>>> are served relatively close to each other,
>>>  a) increasing the likelihood of shared headers compressing the traffic
>>> size, and
>>>  b) allowing any middleware caches to make better use of temporal locality
>>> between related resources.
>>>
>>
>> While the results you state are certainly advantageous, forcing the
>> grouping of requests can be disadvantageous, depending on one's aims
>> (generally in terms of client experience). So I see this as both good and
>> bad.
>>
>>
>>>
>>> 4) priority delivery is completely up to the sender,
>>>  a) no need for any receiver implementation complexity,
>>>  b) no END_STREAM_ACK management
>>>
>>>
>> What complexity are you thinking about here?
>
>
> All of the state management the receiver has to do to send accurate ACK
> frames, track weights relative to everything else, and work out how different
> requests are related to each other.
>  The render engine in browsers may have that information, but I dare say few
> other client implementations have such a rich base of information to work
> from.
>
>
>
>>
>>> 5) priority of frames self-corrects to a certain degree after relaying
>>> over a hop that has not implemented the mechanism.
>>>
>>
>> I don't understand this part.
>
>
> In the case(s) where there are multiple hops and one of them (other than the
> server) does not implement the PRIORITY mechanism in full, the hop delivering
> to the receiving client can perform a small degree of frame re-ordering simply
> by buffering frames. The buffered frames will be drained to the client
> according to the dependencies known at that hop - thus a small amount of
> re-ordering corrects any un-ordering done by its upstream.
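>
> Roughly, in illustrative pseudocode (names made up):
>
>     def drain(buffered_frames, dependency_rank, client):
>         # dependency_rank(stream_id) gives the position in this hop's known
>         # dependency order; buffering plus an ordered drain undoes some of
>         # the un-ordering done by the non-implementing upstream hop.
>         for frame in sorted(buffered_frames,
>                             key=lambda f: dependency_rank(f.stream_id)):
>             client.write(frame)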
>
>
>
>>>
>>> 6) any buffered streams awaiting write or being processed are immediately
>>> affected by the PRIORITY changes of all other streams.
>>>  a) no need for middleware complexity juggling relationships
>>>
>>
>> So, when client A requests a number of things, then changes their mind
>> about wanting them right now, how does that impact client B when both are
>> going through proxy C?
>
>
> How would client A have submitted that information about client B to the
> proxy and server anyway?
>
> Within the restrictions of the stream-ID ordering requirements, the proxy may
> send a PRIORITY update upstream if it wishes, making the client A requests
> depend on the client B ones.
>
>
>
>
>>> 7) robust error handling when broken middleware improperly adjusts
>>> PRIORITY frame stream-IDs as it changes the transferred stream-IDs.
>>>  * The feature otherwise assumes that stream-IDs are identical end-to-end
>>> (a false assumption with middleware).
>>>
>>
>> The prioritization proposal doesn't assume this at all. It assumes,
>> rather,
>> that the proxy must maintain a mapping from client-side-stream-id to
>> server-side-stream-id.
>> This has to be the case, else things just wouldn't work properly.
>>
>
> Unless I missed something, this requirement is not stated in the draft: that
> the proxy needs to rewrite the records listed inside the PRIORITY frame or the
> proposed HEADER field. Any middleware not implementing this feature will
> therefore ignore those records when relaying the frames, so end-to-end stream
> IDs are effectively mandatory/assumed in this draft.
>
>
>>
>>>
>>> 8) a stream's weight is a product of how many other streams depend on it.
>>>  a) depended-on streams are likely first in the list, so they naturally will
>>> get more frames delivered without weightings or explicit rate-limiting of
>>> later streams.
>>>  b) no need for middleware complexity juggling and balancing weights from
>>> N servers with M clients.
>>>  c) no need for server complexity determining relative weight importance
>>> to clients.
>>>  d) no risk of middleware being tricked into turning a small received
>>> PRIORITY change into a massive set of emitted PRIORITY side-effect changes
>>> (or worse, given weightings these could trigger PRIORITY changes to other
>>> client<->server traffic transactions).
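>>>
>>> (Continuing the earlier rough sketch, purely illustrative: the implicit
>>> weight can simply be counted off the list, with nothing extra on the wire.)
>>>
>>>     def effective_weight(stream, streams):
>>>         dependents = [s for s in streams if s.depends_on is stream]
>>>         return 1 + sum(effective_weight(d, streams) for d in dependents)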
>>
>>
>>
>> This doesn't easily allow a proxy to weight clients equally (i.e. fair
>> queuing across clients), which is oftentimes explicitly what is desired. A
>> client with 1000 outstanding requests shouldn't have a 1000X advantage
>> over
>> one which wants only one, simply because it made more requests. That is
>> backwards from how many would like it done much of the time, and would
>> perform extremely poorly under DoS circumstances.
>
>
> We seem to be envisaging different scopes here. I have been looking at this
> list/tree as a per-connection set of requests to prioritise between
> themselves. Your statement suggests that you are considering a single master
> list of all requests served up by the implementation.
>
> By all means weight between connections. But that is absolutely something
> that should be decided locally by the implementation rather than imposed by
> any single client or server of it.
>
> Amos
>
