Re: Streams and Blobs

On Wed, Feb 27, 2013 at 10:39 AM, Aaron Colwell <acolwell@chromium.org> wrote:

>>>  - I would like to stream realtime TV. Pausing shouldn't be a problem
>>> because I don't rely on POST requests and I would just buffer up to a
>>> certain limit.
>>
>>>   - Another use case that comes to mind is starting to watch a video
>>> file before it is fully downloaded.
>>>
>>
>> You don't need XHR or Stream for this.  Just point video @src at the
>> stream.
>>
>
> In the Media Source Extensions context it is valuable to be able to
> consume the data as it arrives instead of waiting for the whole transfer to
> complete. The normal use case for MSE involves chunks of media several
> seconds long and it is important to be able to process the data as it
> arrives to help minimize startup time and media prefetching.
>

(This reply doesn't seem related to the text you're quoting.  Tillmann said
he wants to be able to stream TV.  You don't need any of this to do that;
<video> can do it already.)

Sending data to MSE, or any other API, doesn't have to be done with XHR.
Handing over a URL is much simpler.  Why do you want to go through XHR in
any of these cases?
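
For illustration, a minimal sketch of the URL-based approach (the URL here
is a placeholder):

  // Hand the browser a URL and let it manage the transfer itself: it can
  // open, close, seek and restart the connection as needed.
  var video = document.querySelector('video');
  video.src = 'http://example.com/live/stream'; // placeholder URL

  // The alternative being questioned is to initiate the fetch from XHR and
  // pass the resulting stream object to the consumer, which leaves pausing,
  // seeking and reconnection entirely to the page's script.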

> On Wed, Feb 27, 2013 at 7:24 AM, Glenn Maynard <glenn@zewt.org> wrote:
>
>> These are separate.  We're talking about taking the result of an XHR and
>> handing it off to a native API like <video>.  There's a separate discussion
>> for incremental reads in JavaScript, which doesn't involve Stream at all
>> (search the thread for clearResponse for my rough proposal).
>>
>
> Why do these need to be thought about differently? It seems to me that
> Stream simply represents a non-seekable stream of bytes. It seems
> convenient to be able to hand this object to MSE, video, or any other
> object native or otherwise that wants to consume such a stream. If
> JavaScript needs access to the bytes then something like the StreamReader
> seems reasonable to me just like FileReader seems reasonable for accessing
> Blob data.
>

The whole initial point of this thread is that the Streams spec is a lot of
API for streaming to script.  In response, the only argument made for
Stream is for handing streams off to native APIs (video or anything else).
If that can't be justified with use cases, then we're back where we
started--if the only use cases we have are for streaming to script, we
don't need Stream for that.  All we need to support that is one new method
(and maybe another property, depending on web compatibility).

"It seems convenient" isn't good enough to justify adding something to the
web, if there's a much simpler, more reliable way to do the same thing.
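
For reference, script can already consume a text response incrementally
using nothing beyond plain XHR and progress events.  A rough sketch (this
is the long-standing pattern, not the clearResponse proposal mentioned
above; the URL and consumer function are placeholders):

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/live/feed.txt'); // placeholder URL
  var consumed = 0;
  xhr.onprogress = function() {
    // responseText grows as data arrives, so read only the new portion.
    var chunk = xhr.responseText.substring(consumed);
    consumed = xhr.responseText.length;
    handleChunk(chunk); // placeholder consumer
  };
  xhr.send();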

> I don't see why the difference between POST or GET usage matters. The XHR
> is just being used to fetch data. I don't think we should bake any
> assumptions into which methods people use.
>

The difference, more precisely, is between using XHR to initiate a fetch
and just handing in a URL for the resource.  ("POST" vs. "GET" is a mild
oversimplification of that, since POST is one thing you can't do by passing
in a URL.)

There's a huge, fundamental difference between a URL and an XHR-provided
POST.  If you have a simple URL, e.g. <video src=url>, the browser can open,
close and seek the URL in any way it needs to.  The browser can handle
pausing, seeking, and recovering interrupted streams, completely
automatically.  If you have a POST initiated from XHR, the browser can't do
any of that and it all gets pushed onto the developer.  (Even if it's a
GET, it's not clear that the browser could restart it if it comes from XHR.)
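
To make that concrete: once script owns the connection, resuming an
interrupted transfer looks something like the sketch below, which a plain
<video src=url> gets from the browser automatically (the function name and
URL are placeholders, and the server must support range requests):

  function resumeFrom(url, offset, onChunk) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    // Ask the server for only the bytes we haven't received yet.
    xhr.setRequestHeader('Range', 'bytes=' + offset + '-');
    xhr.responseType = 'arraybuffer';
    xhr.onload = function() {
      onChunk(xhr.response); // 206 Partial Content body, if ranges are honored
    };
    xhr.onerror = function() {
      // Retry and backoff logic is also the page's problem.
    };
    xhr.send();
  }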

On Wed, Feb 27, 2013 at 10:54 AM, Aaron Colwell <acolwell@chromium.org> wrote:

> That is not how I was assuming it would work. I assumed it would keep
> reading & buffer the data just like a normal XHR would do in the other
> modes.
>

I think this may be a more complicated model that will be harder to define
precisely.  But I don't think the need for the feature has been
established, so for now I'll focus on that part of the discussion.

>> It's not just pausing, it's resuming the stream after the TCP connection
>> is closed for any reason, like a network hiccup.  Everyone should want to
>> get that right, not just people who want to support pausing.  This is a
>> problem for every application using this API, and it seems tricky to get
>> right, which in a web API suggests there's something wrong with the API
>> itself.  Handling seeking seems even harder.  If people use GET requests,
>> and don't interject XHR in at all, the whole category of problems goes away.
>>
>
> I don't expect XHR to have to deal with pausing. In most cases where the
> video tag would use Stream it would be for a live stream where pausing
> isn't an option.
>

But again: what cases are those?  Why would you use XHR, create a Stream
and pass the Stream to another API, instead of just giving the API the URL
in the first place?

> Even if it was, since all the video tag has is a non-seekable stream of
> bytes, it would have to either do its own caching or just give up and
> signal an error. It can't make any assumptions about the request because it
> doesn't know where the stream of bytes is coming from. Here we are only
> talking about XHR, but it could be from JavaScript, a WebSocket, or
> something else. The application knows that it is restricting the video tag
> in this way when it chooses to pass in a Stream. If pausing needs to be
> supported then Stream shouldn't be passed to the tag directly. Something
> like MSE or a standard video.src = URL should be used.
>

I'm simply asking: what use cases are there for creating a stream with XHR
and handing the stream off to another API that can't be handled much more
simply by handing a URL to the other API in the first place?
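
For comparison, the chunked MSE case described earlier already works by
appending each fetched chunk as it completes, with no Stream object in the
middle.  A rough sketch (the codec string and segment URLs are
placeholders, and end-of-stream handling is omitted):

  var mediaSource = new MediaSource();
  var video = document.querySelector('video');
  video.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener('sourceopen', function() {
    var sourceBuffer =
        mediaSource.addSourceBuffer('video/webm; codecs="vp8, vorbis"');
    var segment = 0;

    function fetchNextSegment() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/media/segment-' + segment + '.webm'); // placeholder
      xhr.responseType = 'arraybuffer';
      xhr.onload = function() {
        // Append the chunk as soon as it arrives; playback can start before
        // later segments have been fetched.
        sourceBuffer.appendBuffer(new Uint8Array(xhr.response));
        segment++;
      };
      xhr.send();
    }

    // Fetch the next segment each time the previous append finishes.
    sourceBuffer.addEventListener('updateend', fetchNextSegment);
    fetchNextSegment();
  });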

-- 
Glenn Maynard

Received on Thursday, 28 February 2013 01:26:46 UTC