Re: File API: reading a Blob

On Jul 2, 2014, at 10:28 AM, Anne van Kesteren <annevk@annevk.nl> wrote:

> So what I need is something like this:
> 
>  To read a Blob object /blob/, run these steps:
> 
>  1. Let /s/ be a new body. [FETCH]
> 
>  2. Return /s/, but continue running these steps asynchronously.
> 
>  3. While /blob/'s underlying data stream is not closed, run these
>     substeps:
> 
>     1. Let /bytes/ be the result of reading a chunk from /blob/'s
>        underlying data.
> 
>     2. If /bytes/ is not failure, push /bytes/ to /s/ and set
>        /s/'s transmitted to /bytes/'s length.
> 
>     3. Otherwise, signal some kind of error to /s/ and terminate
>        these steps.
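
For concreteness, here is roughly what I understand those steps to mean in script terms, with a ReadableStream standing in for the body /s/. This is only a sketch, not spec text: the 64 KB chunk size and the FileReader-on-blob.slice() machinery are assumptions for illustration.

    // Sketch of the quoted algorithm: read /blob/ chunk by chunk, push each
    // chunk into a stream /s/, and track how many bytes have been transmitted.
    const CHUNK_SIZE = 64 * 1024; // assumed chunk size, purely illustrative

    function readBlobAsStream(blob: Blob): ReadableStream<Uint8Array> {
      let offset = 0;
      let transmitted = 0; // mirrors /s/'s transmitted

      return new ReadableStream<Uint8Array>({
        async pull(controller) {
          // Step 3: stop once /blob/'s underlying data is exhausted.
          if (offset >= blob.size) {
            controller.close();
            return;
          }
          const slice = blob.slice(offset, offset + CHUNK_SIZE);
          try {
            // Step 3.1: read a chunk from /blob/'s underlying data.
            const bytes = new Uint8Array(await sliceToArrayBuffer(slice));
            offset += bytes.length;
            // Step 3.2: push /bytes/ to /s/ and update transmitted.
            transmitted += bytes.length;
            controller.enqueue(bytes);
          } catch (e) {
            // Step 3.3: signal some kind of error to /s/.
            controller.error(e);
          }
        },
      });
    }

    // Helper: read one Blob slice with FileReader.
    function sliceToArrayBuffer(slice: Blob): Promise<ArrayBuffer> {
      return new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onload = () => resolve(reader.result as ArrayBuffer);
        reader.onerror = () => reject(reader.error);
        reader.readAsArrayBuffer(slice);
      });
    }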

Are you sure this simply cannot be done with the existing read operation, which uses annotated tasks for asynchronous use? That operation already serves FileReader and FileReaderSync (and possibly other APIs in FileSystem), and hopefully it can serve Fetch too.

For instance, I thought the idea was that, to read /blob/ within Fetch, we'd do something like this (a rough sketch in script terms follows the steps):

1. Let /s/ be a new body. Return /s/ and perform the rest of these steps asynchronously.
2. Perform a read operation [File API] on /blob/.
3. To process read…
4. To process read data, transfer each byte read to /s/ and set /s/’s transmitted to the number of bytes read.

// Chunked byte transfer is possible within the 50ms delta for process read data; we could specify that more precisely here. //

5. To process read EOF ...
6. Otherwise, to process read error with a failure reason, signal that failure on /s/ ….
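
In script terms, that wiring might look something like the sketch below. The names performReadOperation, processReadData, processReadEOF, and processReadError are hypothetical stand-ins for the read operation and its annotated tasks (they are not real APIs), and a ReadableStream again stands in for the body /s/.

    // Hypothetical wiring of Fetch's body /s/ onto the existing File API
    // read operation and its annotated tasks.
    interface ReadOperationHooks {
      processReadData(chunk: Uint8Array): void; // delivered roughly every 50ms
      processReadEOF(): void;
      processReadError(reason: unknown): void;
    }

    // Assumed stand-in for the File API's asynchronous read operation.
    declare function performReadOperation(blob: Blob, hooks: ReadOperationHooks): void;

    function fetchBodyFromBlob(blob: Blob): ReadableStream<Uint8Array> {
      let transmitted = 0; // mirrors /s/'s transmitted
      return new ReadableStream<Uint8Array>({
        start(controller) {
          // Step 2: perform a read operation [File API] on /blob/.
          performReadOperation(blob, {
            // Step 4: to process read data, transfer the bytes read to /s/.
            processReadData(chunk) {
              transmitted += chunk.length;
              controller.enqueue(chunk);
            },
            // Step 5: to process read EOF, close /s/.
            processReadEOF() {
              controller.close();
            },
            // Step 6: to process read error, signal the failure reason on /s/.
            processReadError(reason) {
              controller.error(reason);
            },
          });
        },
      });
    }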

Why is something like that unworkable, and why exactly do we need another variant of the read operation?

— A*

Received on Wednesday, 2 July 2014 17:07:01 UTC