W3C home > Mailing lists > Public > whatwg@whatwg.org > July 2010

[whatwg] More YouTube response

From: Marques Johansson <marques@displague.com>
Date: Tue, 6 Jul 2010 17:19:42 -0400
Message-ID: <AANLkTilL36oil_Kw4nuhs8JIU1qG-5LRItCSsLPf9VnS@mail.gmail.com>
On Tue, Jul 6, 2010 at 4:37 PM, Aryeh Gregor
<Simetrical+w3c at gmail.com> wrote:

> On Tue, Jul 6, 2010 at 10:24 AM, Marques Johansson
> <marques at displague.com> wrote:
> > The benefit to the user is that they could have fewer open network
> > connections while streaming video from server-controlled sites, and those
> > sites will have the ability to meter their usage more accurately.
> > Inserting an extra clip at the end is more of a playlist or scripting
> > answer.  Or perhaps a live re-encoding answer.  I'm looking for a way to
> > give the user the raw video file in a metered way.
>
> It sounds like your use-case is very special, and best handled by
> script.  I suggested server-side script -- you could do that today.
> Just cut off the HTTP connection when the user has used up their
> allotted time.  Alternatively, it might be reasonable to have
> client-side scripting for video that's flexible enough to do what you
> want.  But a dedicated declarative feature is just not reasonable for
> such a specific purpose.


I tested cutting off the HTTP connection and browsers didn't handle this.  I
realize I may need to test a deeper server-side cut than a php exit() can
provide.  I have essentially tested this (but not exactly this - the real
code involves filehandles, sessions, additional code, etc.):
<?php
header("HTTP/1.1 206 Partial Content");
header("Accept-Ranges: bytes");
header("Content-Range: bytes 0-999999/1000000");
header("Content-Length: 1000000");  // promise 1000k
echo str_repeat(" ", 1000);         // but return only 1k
exit();

and found that browsers do not attempt to refetch the data or continue with
a 206 for the next block.
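For contrast, here is a sketch of the server behavior that Chrome's continuation seems to rely on: answer every Range request with a correct, complete 206 for exactly the bytes asked for.  This is not my original test; parse_range() and the file path are illustrative names.

```php
<?php
// Sketch: honor each Range request with a correct 206, so a UA that refetches
// ("request 6-10" after a short reply) gets exactly what it asked for.

function parse_range($header, $total) {
    // Accepts "bytes=START-END" or "bytes=START-"; returns [start, end].
    if (!preg_match('/bytes=(\d+)-(\d*)/', $header, $m)) {
        return array(0, $total - 1);      // no usable range: serve everything
    }
    $start = (int)$m[1];
    $end   = ($m[2] === '') ? $total - 1 : min((int)$m[2], $total - 1);
    return array($start, $end);
}

if (PHP_SAPI !== 'cli') {                 // only meaningful when serving HTTP
    $file  = 'video.ogv';                 // illustrative path
    $total = filesize($file);
    $range = isset($_SERVER['HTTP_RANGE']) ? $_SERVER['HTTP_RANGE'] : '';
    list($start, $end) = parse_range($range, $total);

    header('HTTP/1.1 206 Partial Content');
    header('Accept-Ranges: bytes');
    header("Content-Range: bytes $start-$end/$total");
    header('Content-Length: ' . ($end - $start + 1));

    $fh = fopen($file, 'rb');
    fseek($fh, $start);
    echo fread($fh, $end - $start + 1);   // send exactly the promised bytes
}
```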

Shouldn't something like this be worked into the protocol or the language
instead of having to break the stream at a higher level?

Consider the existing 4xx errors.  The server can tell the client that the
request entity or URI was too large (413/414), or that a Content-Length is
required (411), but not that a range is required or that the requested range
is too large (1gb?).
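To make the gap concrete, the kind of negotiation I have in mind would look something like this.  Neither the status line nor the header below exists in HTTP; both are purely hypothetical:

```
GET /video.ogv HTTP/1.1            (no Range header sent)

HTTP/1.1 4xx Range Required        <- hypothetical status code
Accept-Ranges: bytes
Max-Range-Length: 1048576          <- hypothetical header: retry with a
                                      range no larger than 1MB
```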

> > A 200 response or partial 206 responses that return less than the full
> > requested range are not handled by browsers in a consistent or usable way
> > (for this purpose).  Only Chrome will continue to fetch where the
> > previous short 206 response left off (request 1-10, server replies 1-5,
> > request 6-10, server replies 6-10).  The HTTP spec isn't clear about
> > whether UAs should adopt this behavior - and so they don't.
> > Some UAs request video without sending "Range: bytes 0-".  The server has
> > no way to negotiate that the UA (a) must use ranges to complete the
> > request or that (b) the range requested is too large, retry with a
> > smaller range.
>
> You don't need to return less than the browser requests, until it
> actually uses up the bandwidth the user has paid for.  Let the browser
> fetch as much of the video as the user wants to view, using
> preload=metadata when that's supported by all browsers.  Every time
> the server sends a new chunk of video to the user, it should deduct
> that much from the user's current balance.  When the user has received
> as much video as he's paid for, just have the script exit without
> sending more output.
>

preload="metadata" is the plan - but how far ahead will the browser attempt
to buffer?  When will the browser stop buffering and start playing?  I think
the server/service/html/http side of things should have some say.  I
wouldn't want to see browser X seek 10 seconds ahead while browser Y fetches
60 seconds in advance.

I think a buffer="" attribute is reasonable.  With buffer="" being akin to a
minbuffer="", I feel that a maxbuffer="" is also reasonable.  Again - this
would be easier than getting HTTP spec changes made.
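In markup, the idea would look something like the sketch below.  To be clear, neither buffer="" nor maxbuffer="" exists in any spec; only preload is real:

```html
<!-- Hypothetical buffer=""/maxbuffer="" attributes (not in any spec):
     buffer:    minimum seconds the UA should buffer before starting playback
     maxbuffer: maximum seconds the UA may fetch ahead of the playhead -->
<video src="video.ogv" preload="metadata" buffer="10" maxbuffer="60"></video>
```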

> You don't have to return a proper Range header and expect the browser
> to issue new requests.  Just pretend you're serving the whole
> resource, then cut it off unexpectedly before you've actually returned
> all the content.  The browser will handle this fine, it will just
> treat it as a network error.  A client-side script can then detect the
> error and present nice UI.


As I stated before, this didn't pan out for me - I will happily test other
methods.
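For reference, the metered cut-off you describe would look roughly like this on my side.  This is only a sketch of the approach, not working service code; bytes_to_send() is mine, and get_balance()/charge() are hypothetical stand-ins for whatever session or database bookkeeping the real service would do:

```php
<?php
// Sketch: promise the whole file, deduct each chunk from the user's
// balance, and stop output once the balance is exhausted.

function bytes_to_send($chunk, $balance) {
    // Never send more than the user has paid for.
    return max(0, min($chunk, $balance));
}

if (PHP_SAPI !== 'cli') {                     // only when serving over HTTP
    $file    = 'video.ogv';                   // illustrative path
    $balance = get_balance($_SESSION['user']);    // hypothetical helper
    header('Content-Type: video/ogg');
    header('Content-Length: ' . filesize($file)); // promise the full file

    $fh = fopen($file, 'rb');
    while (!feof($fh)) {
        $n = bytes_to_send(8192, $balance);
        if ($n === 0) {
            break;                            // quota exhausted: cut the stream
        }
        echo fread($fh, $n);
        flush();
        $balance -= $n;
        charge($_SESSION['user'], $n);        // hypothetical helper
    }
}
```

As discussed above, the open question is still how the browser reacts when the stream stops mid-resource.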

Thanks.
Received on Tuesday, 6 July 2010 14:19:42 UTC
