- From: Adam Rice <notifications@github.com>
- Date: Sun, 05 Jan 2020 18:44:11 -0800
- To: w3c/FileAPI <FileAPI@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
- Message-ID: <w3c/FileAPI/issues/144/570982732@github.com>
This is under the FileAPI's jurisdiction. It's implementation-defined, and difficult to put tight constraints on without forcing implementations to do inefficient things. I hope Firefox and Chromium arrived at the same size by coincidence rather than reverse-engineering.

An implementation that returned 1-byte chunks would clearly be unreasonably inefficient. An implementation that returned the whole blob as a single chunk would clearly fail to scale. So it _is_ possible to define some bounds on what a "reasonable" size is.

I would recommend using dynamic allocation to store the chunks in wasm if possible, and assuming that implementations will behave reasonably.

In the standard, it would probably be good to enforce "reasonable" behaviour by saying that no chunk can be >1 MiB and no non-terminal chunk can be <512 bytes. Maybe that second constraint could be phrased more carefully to allow for ring-buffer implementations that occasionally produce small chunks but mostly don't. Alternatively, the standard could be extremely prescriptive and require 65536-byte non-terminal chunks, on the assumption that any new implementation can be made to comply without too much loss of efficiency.

--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/w3c/FileAPI/issues/144#issuecomment-570982732
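The practical upshot for consumers is to never assume any particular chunk size from `blob.stream()` and to accumulate chunks dynamically. A minimal sketch of that approach (JavaScript, assuming a Web-platform or Node 18+ environment with `Blob` and `ReadableStream`; the function names here are illustrative, not from the spec):

```javascript
// Consume a Blob's stream without assuming a chunk size; the spec leaves
// chunking implementation-defined, so each read() may yield a Uint8Array
// of any length the implementation chose.
async function readBlobChunks(blob) {
  const reader = blob.stream().getReader();
  const chunks = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
  return chunks;
}

// Reassemble the original bytes regardless of how the implementation
// chose to split them.
function concatChunks(chunks) {
  const total = chunks.reduce((n, c) => n + c.byteLength, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset);
    offset += c.byteLength;
  }
  return out;
}
```

Code written this way keeps working whether an implementation emits 512-byte or 1 MiB chunks, which is the "assume reasonable behaviour" posture recommended above.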
Received on Monday, 6 January 2020 02:44:13 UTC