Re: [File API] About Partial Blob Data, XHR and Streams API

Hi Jonas,

On 1/18/2013 11:14 AM, Jonas Sicking wrote:
> On Thu, Jan 17, 2013 at 1:56 AM, Cyril Concolato
> <> wrote:
>> Hi all,
>> Reading the File API, it is not clear to me what the behavior is when
>> reading partial Blob data. The spec says:
>> " Partial Blob data is the part of the File or Blob that has been read into
>> memory currently;
>> when processing the read method readAsText, partial Blob data is a DOMString
>> that is incremented as more bytes are loaded (a portion of the total)
>> [ProgressEvents],
>> and when processing readAsArrayBuffer partial Blob data is an ArrayBuffer
>> [TypedArrays] object consisting of the bytes loaded so far (a portion of the
>> total)[ProgressEvents]. "
>> Does this mean that the result object is the same or it is a new object each
>> time there is a progress event ? In the case of a DOMString, it could be the
>> same object incremented but if it is an ArrayBuffer, since it is immutable,
>> it cannot be incremented.
> Strings in JS are immutable. So it will always be a new string.
>> So in the case the final length of the Blob is not
>> known yet (e.g. chunked XHR), result has to be a new object each time. Am I
>> wrong here? If not, could you clarify the spec ?
> The size of a Blob is always known. The .size property never returns
> 'undefined' or 'null' or anything like that. XHR never returns a Blob
> object until it knows what size of Blob object to create.
Thanks for the clarification. So if I understand correctly, the result 
attribute of a FileReader object is a different object at each progress 
event. Is that right?
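Related to partial reads: one way to read only a byte range of a Blob, rather than re-reading the cumulative result, is to slice the Blob first. A rough sketch (the helper name readRange is mine, not from the spec):

```javascript
// Sketch: read only a byte range of a Blob by slicing it first.
// Blob.slice(start, end) returns a new Blob over that range without
// copying the underlying data up front.
async function readRange(blob, start, end) {
  const part = blob.slice(start, end);   // new Blob covering [start, end)
  const buf = await part.arrayBuffer();  // read just that range
  return new Uint8Array(buf);
}

// Usage: read bytes 6..10 of an 11-byte Blob.
readRange(new Blob(["hello world"]), 6, 11).then((bytes) => {
  console.log(new TextDecoder().decode(bytes)); // "world"
});
```

This only helps when the byte range of interest is known in advance, which is not the case for a live stream.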
I also have a few more questions about the use of XHR and FileReader. 
The use case I'm working on is the HTTP streaming of live binary data 
(typically video and audio, but not exclusively) using chunked XHR.

1) I'm wondering why, in progressive mode, the spec says: "partial 
Blob data is an ArrayBuffer [TypedArrays] object consisting 
of the bytes loaded so far". Why isn't it the bytes loaded since the 
previous progress event?
In my use case, the binary data resource might have an infinite size, in 
which case the result objects will grow forever.
I looked at the Streams API [1] to see if there was any difference in 
that respect, but I couldn't see any. Reading a Stream (dynamic length) 
or a Blob (fixed length) with the FileReader interface seems to always 
return the whole content.
I also looked at the WHATWG XHR spec [2] and its use of responseType 
"stream"; in this case, the response attribute is a Stream object 
containing "the fragment of the entity body of the response received 
so far". So this is not useful either.

2) I'm also wondering why the design doesn't enable accessing the 
content of each HTTP chunk directly within the XHR object. In a video 
streaming use case, the server might have carefully created the (video) 
chunks so that an application can use them independently without parsing 
them (typically passing them to the decoder through the Media Source 
Extensions API [3]). With the FileReader approach, the application has 
to parse each progress event's result object to determine meaningful 
chunks for the video decoder.
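To illustrate the parsing burden: if the server framed each chunk with, say, a 4-byte big-endian length prefix (a hypothetical framing, nothing the specs provide), the application would need code along these lines to rebuild decoder-ready chunks from the byte deltas it extracts:

```javascript
// Sketch: recover length-prefixed chunks from an incremental byte
// stream. Each chunk is framed as: 4-byte big-endian length, payload.
function makeChunkParser() {
  let buffered = new Uint8Array(0);
  return function push(bytes /* Uint8Array */) {
    // Append the new bytes to whatever was left over from last time.
    const merged = new Uint8Array(buffered.length + bytes.length);
    merged.set(buffered);
    merged.set(bytes, buffered.length);
    buffered = merged;

    const chunks = [];
    while (buffered.length >= 4) {
      const view = new DataView(buffered.buffer, buffered.byteOffset);
      const len = view.getUint32(0);        // big-endian length prefix
      if (buffered.length < 4 + len) break; // payload not complete yet
      chunks.push(buffered.slice(4, 4 + len));
      buffered = buffered.slice(4 + len);   // keep the remainder
    }
    return chunks;
  };
}
```

Direct access to the HTTP chunks would make all of this buffering and re-framing unnecessary when the server's chunk boundaries are already meaningful.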



Cyril Concolato
Maître de Conférences/Associate Professor
Groupe Multimedia/Multimedia Group
Telecom ParisTech
46 rue Barrault
75 013 Paris, France

Received on Friday, 18 January 2013 12:59:45 UTC