Re: Some HTTP 2.0 questions

On Wed, Dec 4, 2013 at 11:36 AM, Roberto Peon <grmocg@gmail.com> wrote:

> Correct. The server should seek to use bandwidth in the order you've
> expressed via priorities but, if unable to do so, will send chunks of
> lower-priority stuff in order to keep the pipe full.
>
> The observation is that requiring the server to absolutely follow priority
> order means that you may starve the outgoing pipe, or fail to take
> advantage of something that the server knows that the client doesn't (yet).
>

Exactly what Roberto said. It's worth additionally noting that the
current integer-based priority scheme makes this kind of prioritization
difficult; I suppose you could allocate one priority value per chunk. If
you're writing your own HTTP/2 client, then you have full control. If
you're going through web platform bindings, then this control is not
exposed, and I doubt that it will be (but who knows). In the discussion
thread Roberto referenced earlier, we discuss possible new proposals to
use stream dependencies instead of strict priority levels. That would
match your semantics exactly, since you want to express a
reprioritizable pipeline of video chunks, which is something we had in
mind when designing the proposal. If this sort of semantic is desirable
to you, extra voices expressing that would be useful, since every time
we want to add complexity like this, people justifiably ask "who's
going to use this?"
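To make the dependency idea concrete, here is a rough Python sketch.
All the names (DependencyChain, open, reprioritize, send_order) are
mine, invented for illustration - nothing here is from the draft or
from any wire format. Each video chunk gets its own stream, each stream
depends exclusively on the previous one, and a seek becomes a single
dependency update instead of renumbering every chunk's priority.

```python
# Toy model of dependency-based prioritization (invented names; the
# then-current draft only had integer priorities). Streams form a chain
# that the server drains in order; reprioritizing splices one stream
# elsewhere in the chain, much like a PRIORITY frame carrying an
# exclusive dependency would.

class DependencyChain:
    def __init__(self):
        self.parent = {}  # stream_id -> stream it depends on (None = root)
        self.child = {}   # stream_id -> stream depending on it (exclusive)

    def open(self, sid, depends_on=None):
        self.parent[sid] = depends_on
        self.child[sid] = None
        if depends_on is not None:
            self.child[depends_on] = sid

    def reprioritize(self, sid, new_parent):
        # Detach sid from its current position...
        old_p, old_c = self.parent[sid], self.child[sid]
        if old_p is not None:
            self.child[old_p] = old_c
        if old_c is not None:
            self.parent[old_c] = old_p
        # ...then splice it in directly below new_parent.
        displaced = self.child[new_parent]
        self.child[new_parent] = sid
        self.parent[sid] = new_parent
        self.child[sid] = displaced
        if displaced is not None:
            self.parent[displaced] = sid

    def send_order(self):
        # The server serves the chain from the root downward.
        sid = next(s for s, p in self.parent.items() if p is None)
        order = []
        while sid is not None:
            order.append(sid)
            sid = self.child[sid]
        return order

chain = DependencyChain()
prev = None
for sid in (1, 3, 5, 7):  # one stream per chunk (odd = client-initiated)
    chain.open(sid, depends_on=prev)
    prev = sid
# The viewer seeks: the chunk on stream 7 is now wanted right after 1.
chain.reprioritize(7, new_parent=1)
```

The point is that the seek touches one stream's dependency, whereas the
integer scheme would need a priority update for every chunk behind it.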

I'm hoping to get time this week or next week to resubmit the proposal as
an I-D as requested.
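For what it's worth, Roberto's zero-window suggestion further down the
thread can also be sketched with a toy model (again, invented names,
not a real HTTP/2 stack): the client advertises a zero initial stream
window, so the server cannot send anything until the client opens a
window, and the client opens windows strictly in playback order.

```python
# Toy model of 'hard' priority via flow control: zero initial stream
# windows, with the client doling out WINDOW_UPDATEs in the order it
# wants the responses. Not a real HTTP/2 implementation.

class ToyServer:
    def __init__(self, responses):
        self.pending = dict(responses)               # stream_id -> bytes left
        self.window = {sid: 0 for sid in responses}  # zero initial window
        self.wire = []                               # (stream_id, bytes) sent

    def window_update(self, sid, increment):
        self.window[sid] += increment
        self.pump()

    def pump(self):
        # Send whatever flow control currently allows, on any stream.
        for sid in list(self.pending):
            n = min(self.pending[sid], self.window[sid])
            if n > 0:
                self.wire.append((sid, n))
                self.pending[sid] -= n
                self.window[sid] -= n

server = ToyServer({1: 100, 3: 100, 5: 100})
# 'Hard' priority: open windows one stream at a time, in playback order.
for sid in (1, 3, 5):
    server.window_update(sid, 100)
```

As noted in the quoted discussion, you pay for this in latency: the
server sits idle for a round trip whenever it has data ready but no
window, which is exactly the starvation soft priorities avoid.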


>
> -=R
>
>
> On Wed, Dec 4, 2013 at 11:32 AM, Mark Watson <watsonm@netflix.com> wrote:
>
>>
>>
>>
>> On Wed, Dec 4, 2013 at 9:53 AM, William Chan (陈智昌) <willchan@chromium.org>
>> wrote:
>>
>>> On Wed, Dec 4, 2013 at 9:37 AM, Roberto Peon <grmocg@gmail.com> wrote:
>>>
>>>>
>>>>
>>>>
>>>> On Wed, Dec 4, 2013 at 9:29 AM, Mark Watson <watsonm@netflix.com> wrote:
>>>>
>>>>> Thanks for all the rapid responses.
>>>>>
>>>>>  - yes, it was the phrase "sender-advised" that confused me in the
>>>>> definition of PRIORITY. It's not clear that the receiver of a stream can
>>>>> request a change to its priority, but I understand from the responses below
>>>>> that it's intended this is allowed
>>>>>
>>>>
>>>> Yup. The mechanism we have today is not very efficient and requires
>>>> data-structure gymnastics to implement, though; hence the current work.
>>>>
>>>>
>>>>>  - modification of the request would indeed be something new at the
>>>>> HTTP layer. But then, this is HTTP/2.0 ;-) The use-case I am thinking
>>>>> of is to modify the Range, for example if the resource is somehow
>>>>> progressively encoded and the client decides it only needs the lower
>>>>> layers. How would this differ from canceling the old request and
>>>>> making a new one? The difference is admittedly minor unless the RTT
>>>>> is large: one could cancel the old request, wait for any remaining
>>>>> data to arrive (one RTT), then send the new request (another RTT).
>>>>> Or one could take a guess at where the old request will finish and
>>>>> send both the cancellation and the new request at the same time. But
>>>>> then, depending on how good or bad your guess is, you either have
>>>>> duplicate data transfer or a gap. I accept the point that some
>>>>> real-world data is necessary to motivate this use-case.
>>>>>
>>>>> I have one follow-up question: IIUC, the notion of priority is
>>>>> 'soft' - that is, the server can choose to return response data out
>>>>> of priority order. How would you implement 'hard' priority, that is,
>>>>> where response data must be returned in priority order, or, I guess,
>>>>> can only be out of order if there is no data available to send from
>>>>> higher-priority responses?
>>>>>
>>>>
>>> If you don't want to pay a speed hit due to unused bandwidth (no data
>>> available from higher-priority sources), you must allow out-of-order
>>> responses.
>>>
>>>
>>>> I'd have the client make requests with zero stream-flow-control window
>>>> size and open up the windows in whatever order/way I saw fit.
>>>> In most cases, this is probably a losing proposition, latency-wise, but
>>>> it can be done.
>>>> There are certainly valid and interesting use-cases for using this
>>>> mechanism to limit the amount of resources used when doing
>>>> prefetching, for instance.
>>>>
>>>
>>> Yeah, Roberto's right here, but just to emphasize: you lose much of
>>> the gain of using HTTP/2 in the first place if you don't allow out of
>>> (priority) order responses. And I would argue that an interoperable
>>> HTTP/2 implementation should NOT do this, because the performance loss
>>> is so substantial that, were this to become commonplace, it would
>>> incentivize clients to switch back to multiple connections to get
>>> parallel downloads.
>>>
>>> PS: You may also get suboptimal prioritization the more you eagerly
>>> _push_ into lower-level queues rather than lazily _pull_ from
>>> higher-level queues. For example, once you push into the kernel socket
>>> buffer, the application can't reorder HTTP/2 frames in that buffer,
>>> even though there may be time left to do so before the bytes get
>>> emitted on the wire. There are some computational tradeoffs due to
>>> more context switches, but the longer you delay the actual commit of
>>> an HTTP/2 frame to the network, the better prioritization you get.
>>>
>>
>> My use-case is streaming video, where the requests correspond to
>> chunks of video that are sequential in time. I never want later video
>> data to be sent if there was earlier video data available that could
>> have been sent instead, so I want the priority to be strictly
>> respected by the server in that sense. Am I right that the spec
>> currently would allow, but not require, a server to behave that way?
>>
>> ...Mark
>>
>>
>>
>>
>>
>>
>>>
>>>
>>>>
>>>> -=R
>>>>
>>>>
>>>>
>>>>>
>>>>> ...Mark
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Dec 4, 2013 at 9:05 AM, Roberto Peon <grmocg@gmail.com> wrote:
>>>>>
>>>>>> Look for the thread entitled:
>>>>>>
>>>>>> Restarting the discussion on HTTP/2 stream priorities
>>>>>>
>>>>>> (started on Oct 28)
>>>>>>
>>>>>> for further details about how we'd like to see priority changed.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Dec 4, 2013 at 8:49 AM, Roberto Peon <grmocg@gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Dec 4, 2013 at 8:31 AM, Mark Watson <watsonm@netflix.com> wrote:
>>>>>>>
>>>>>>>> Hi everyone,
>>>>>>>>
>>>>>>>> I recently reviewed the HTTP 2.0 draft. There are three things I
>>>>>>>> expected to see that it wasn't immediately obvious how to
>>>>>>>> achieve. Apologies if there have already been long discussions on
>>>>>>>> these - feel free to point me at the archives if that is the case.
>>>>>>>>
>>>>>>>> (1) Canceling an HTTP request (e.g. if the client decides it no
>>>>>>>> longer needs a requested resource). This is a pain to do with
>>>>>>>> HTTP/1.x, requiring the connection to be closed, losing all
>>>>>>>> pipelined requests and incurring a new TCP connection
>>>>>>>> establishment delay. I assume one could close a stream in
>>>>>>>> HTTP/2.0, canceling all requests on that stream. Does this mean
>>>>>>>> that for individual control of HTTP requests one must ensure each
>>>>>>>> response is on its own stream? How does the client ensure that?
>>>>>>>>
>>>>>>> A stream is a single request in HTTP/2.
>>>>>>> Cancelling the stream cancels the request.
>>>>>>>
>>>>>>>
>>>>>>>> (2) Receiver modification of stream priority. The client may have
>>>>>>>> (changing) opinions about the relative priority of resources. The
>>>>>>>> specification allows a sender of a stream to set its priority, but I didn't
>>>>>>>> immediately see how the receiver could request priority changes. [Flow
>>>>>>>> control seems to be a slightly different thing].
>>>>>>>>
>>>>>>>>
>>>>>>> This is an open issue and is being worked on.
>>>>>>>
>>>>>>>
>>>>>>>> (3) Modification of HTTP requests. The client may wish to change
>>>>>>>> some fields of an HTTP request. Actually, the only one I can
>>>>>>>> think of right now is Range. For example, if the client decides
>>>>>>>> it does not need the whole of the originally requested range, it
>>>>>>>> would be more efficient to modify the Range than to wait until
>>>>>>>> the required data is received and cancel the request.
>>>>>>>>
>>>>>>>>
>>>>>>> I don't think we've heard about this as a compelling use-case
>>>>>>> from anyone yet. Why would it be significantly better than
>>>>>>> cancelling the previous request and sending another?
>>>>>>>
>>>>>>> -=R
>>>>>>>
>>>>>>>> Thanks in advance for any pointers on these. If they are new
>>>>>>>> features requiring more detailed use-cases I can provide those.
>>>>>>>> ...Mark
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Received on Wednesday, 4 December 2013 19:46:24 UTC