- From: Cyril Concolato <cyril.concolato@telecom-paristech.fr>
- Date: Tue, 22 Jan 2013 15:42:04 +0100
- To: public-webapps@w3.org
- Message-ID: <50FEA53C.2020100@telecom-paristech.fr>
Hi Arun,

On 22/01/2013 15:04, Arun Ranganathan wrote:
> Hi Cyril,
>
> > 1) I'm wondering why, in progressive mode, the spec says:
> > "partial Blob data is an ArrayBuffer [TypedArrays
> > <http://dev.w3.org/2006/webapi/FileAPI/#TypedArrays>] object
> > consisting of the bytes loaded so far". Why isn't it the bytes loaded
> > since the previous progress event?
>
> AR: It is always a new ArrayBuffer. The phraseology "so far" could be
> replaced by "bytes loaded since the previous progress event" though
> I'm not always sure that will be the case.

I understood from Jonas' answer that it was a new ArrayBuffer. What remained
unclear was the content of the ArrayBuffer: all the data from the beginning
of the read operation (which was problematic), or only the data read since
the previous progress event (which I prefer). If, as you say, it is the
latter option, then please fix the spec, as "so far" is very ambiguous.

> > In my use case, the binary data resource might have an infinite
> > size, in which case the result objects will grow forever.
> > I looked at the Streams API [1] to see if there was any difference
> > for that but I couldn't see any. Reading a Stream (dynamic length) or
> > a Blob (fixed length) with the FileReader interface seems to always
> > return the whole content.
>
> AR: Here, do you mean, you never get a progress event other than load
> and loadend in your tests?

No, I meant that the Streams API uses the same approach as the File API:
"This method should mimic FileReader.readAsArrayBuffer()
<http://dev.w3.org/2006/webapi/FileAPI/#readAsArrayBufferSyncSection>".
So, reading "so far", I understood that at each event you would get the
content of the stream read so far from the beginning, which is practically
unusable. If the File API spec is fixed, the Streams API is fixed as well.

> Certainly, if you had binary data of infinite size, you'll get ....
> several.... result objects. The File API, particularly FileReader,
> shouldn't be used in streaming scenarios.

I disagree. The File API combined with XHR should be fine for reading
(large) files whose size is known when making the request but which are
still delivered using HTTP streaming approaches. The Streams API and XHR
should be fine for the same thing but for (infinite) files whose size you
don't know (chunked transfer to simulate IceCast/ShoutCast). A possible
problem is when the app wants to receive the exact chunks created by the
server (point 2 in my previous email), which the FileReader API doesn't
preserve.

Cyril

-- 
Cyril Concolato
Maître de Conférences/Associate Professor
Groupe Multimedia/Multimedia Group
Telecom ParisTech
46 rue Barrault, 75013 Paris, France
http://concolato.wp.mines-telecom.fr/
Received on Tuesday, 22 January 2013 14:42:21 UTC
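
[Editorial note] To make the ambiguity Cyril describes concrete, here is a
minimal TypeScript sketch, contrasting what a consumer would have to do under
each reading of "partial Blob data". It is illustrative only: `consume` is a
hypothetical callback, and whether `result` is actually populated during
progress events is itself implementation-dependent, so this does not describe
how any particular browser behaves.

    // Hypothetical consumer of incoming bytes (e.g. an incremental media parser).
    declare function consume(chunk: ArrayBuffer): void;

    // Reading A -- "bytes loaded so far": every progress event carries all data
    // from the start of the read, so the consumer has to remember how much it
    // has already handled, and the result object keeps growing without bound.
    function readCumulative(blob: Blob): void {
      const reader = new FileReader();
      let handled = 0;
      reader.onprogress = () => {
        const buf = reader.result;
        if (buf instanceof ArrayBuffer) {
          consume(buf.slice(handled)); // only the tail is actually new data
          handled = buf.byteLength;
        }
      };
      reader.readAsArrayBuffer(blob);
    }

    // Reading B -- "bytes loaded since the previous progress event", the reading
    // Cyril asks the spec to state explicitly: each result is an independent
    // chunk that can be handed straight to the consumer, and nothing grows.
    function readIncremental(blob: Blob): void {
      const reader = new FileReader();
      reader.onprogress = () => {
        const buf = reader.result;
        if (buf instanceof ArrayBuffer) {
          consume(buf);
        }
      };
      reader.readAsArrayBuffer(blob);
    }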
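
[Editorial note] And a sketch of one possible wiring of the XHR-plus-File-API
combination Cyril mentions for large files of known size. The URL is made up
and `readIncremental` is the helper from the sketch above; note that with
responseType "blob" the response Blob only becomes available once the transfer
has completed, which is one reason the unbounded case points at the Streams
API instead. This is an assumption for illustration, not necessarily the
pattern Cyril has in mind.

    // Fetch a large resource as a Blob over XHR, then read it progressively
    // with FileReader (readIncremental is sketched above).
    function fetchAndRead(url: string): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", url);
      xhr.responseType = "blob"; // deliver the response body as a Blob
      xhr.onload = () => {
        if (xhr.status === 200) {
          readIncremental(xhr.response as Blob);
        }
      };
      xhr.send();
    }

    // Example use with a made-up URL:
    fetchAndRead("http://example.com/large-media-file.bin");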