- From: Anne van Kesteren <annevk@annevk.nl>
- Date: Tue, 5 Aug 2014 15:43:17 +0200
- To: Arun Ranganathan <arun@mozilla.com>
- Cc: Web Applications Working Group WG <public-webapps@w3.org>, Domenic Denicola <domenic@domenicdenicola.com>, Kyle Huey <me@kylehuey.com>
Sorry for the late response, Arun. I blame vacation and not being quite
sure how we should solve this taking into account the legacy consumers.

On Thu, Jul 17, 2014 at 2:58 PM, Arun Ranganathan <arun@mozilla.com> wrote:
> There are two questions:
>
> 1. How should FileReaderSync behave, to solve the majority of use cases?
> 2. What is a useful underlying abstraction for spec. authors that can be
>    reused in present APIs like Fetch and future APIs?

I'm not sure.

>>> We agreed some time ago to not have partial data.
>>
>> Pointer? I also don't really see how that makes sense given how
>> asynchronous read would perform.
>
> Well, the bug that removed them is:
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=23158 and dates to last year.
>
> Problems really include decoding strings according to the encoding
> determination for incomplete Blobs:
>
> http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0063.html
>
> Another thread covered deltas in progress events:
>
> http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0069.html
>
> I don’t have pointers to IRC conversations, but:
>
> 1. Decoding was an issue with *readAsText*. I suppose we could make that
>    method alone be all or nothing.

Well, a synchronous readAsText would presumably operate on the bytes
returned. What that would do is clearly defined.

>> Yeah, I now think that we want something even lower-level and build
>> the task queuing primitive on top of that. (Basically by observing the
>> stream that is being read and queuing tasks as data comes in, similar
>> to Fetch. The synchronous case would just wait for the stream to
>> complete.)
>
> If I understand you correctly, you mean something that might be two-part
> (some hand waving below, but …):
>
> To read a Blob object /blob/, run these steps:
>
> 1. Let /s/ be a new buffer.
>
> 2. Return /s/, but continue running these steps asynchronously.
>
> 3. While /blob/'s underlying data stream is not closed, run these
>    substeps:
>
>    1. Let /bytes/ be the result of reading a chunk from /blob/'s
>       underlying data.
>
>    2. If /bytes/ is not failure, push /bytes/ to /s/ and set
>       /s/'s transmitted to /bytes/'s length.
>
>    3. Otherwise, signal some kind of error to /s/ and terminate
>       these steps.
>
> AND
>
> To read a Blob object with tasks:
>
> 1. Run the read a Blob algorithm above.
> 2. When reading the first /bytes/, queue a task called process read.
> 3. When pushing /bytes/ to /s/, queue a task called process read data.
> 4. When all /bytes/ are pushed to /s/, queue a task called process read EOF.
> 5. If an error condition is signaled, queue a task called process error
>    with a failure reason.
>
> Is “chunk” implementation defined? Right now we assume 1 byte or 50ms.

“Chunk” seems a bit hand-wavy and hard to enforce, but… it might be the
right approach. Would have to discuss with Domenic, but something like
chunks seems to be much closer to how these things actually work.

--
http://annevankesteren.nl/
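[Editorial note: the two-part algorithm quoted above can be sketched roughly as follows. This is a hand-wavy illustration only, not the proposal itself, which is spec prose rather than an API; the `readBlob` and `onTask` names are invented here, and `Blob.prototype.stream()` is a modern convenience that postdates this 2014 thread. Chunk size is whatever the stream delivers, i.e. implementation defined, as the email notes.]

```javascript
// Sketch of "read a Blob" plus the task-queuing layer on top of it.
// onTask stands in for "queue a task called …".
async function readBlob(blob, onTask) {
  const reader = blob.stream().getReader(); // /blob/'s underlying data stream
  const s = [];          // the buffer /s/
  let transmitted = 0;   // /s/'s transmitted (incremented per chunk)
  let first = true;
  try {
    // While the underlying data stream is not closed, read a chunk.
    while (true) {
      const { value: bytes, done } = await reader.read();
      if (done) break;                 // stream closed
      if (first) {
        onTask("process read");        // task when reading the first /bytes/
        first = false;
      }
      s.push(bytes);                   // push /bytes/ to /s/
      transmitted += bytes.length;
      onTask("process read data");     // task per pushed chunk
    }
    onTask("process read EOF");        // all /bytes/ are pushed to /s/
    // Concatenate /s/ into one Uint8Array for the caller.
    const result = new Uint8Array(transmitted);
    let offset = 0;
    for (const chunk of s) {
      result.set(chunk, offset);
      offset += chunk.length;
    }
    return result;
  } catch (reason) {
    onTask("process error");           // signal the failure reason
    throw reason;
  }
}
```

The synchronous case described upthread would simply be the same loop without the intermediate tasks, waiting for the stream to complete before returning.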
Received on Tuesday, 5 August 2014 13:43:44 UTC