- From: Marijn Kruisselbrink <notifications@github.com>
- Date: Fri, 12 Apr 2019 11:46:49 -0700
- To: whatwg/xhr <xhr@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
- Message-ID: <whatwg/xhr/issues/244@github.com>
In Chrome, as part of a larger XHR-to-blob refactoring, we shipped a change a while ago that altered the behavior when the resulting blob would be too large to fit in memory/on disk. Before, we would download as much as we could fit on disk, sending progress events while doing so, eventually run out of disk space, and only at that point error out the request. After the change, we look at the Content-Length header ahead of time (to decide between various strategies for dealing with blobs) and immediately fail the request if we already know we're not going to be able to store the whole response. This seemed safe at the time, since one way or another the request is going to fail anyway. However, I didn't realize that the difference would be web observable via the (lack of) progress events, and this seems to have actually broken at least one (speed test) website.

This isn't an area where the behavior is really fully specified. The only hint I can find about what should happen when the result of an XHR is too big to "fit" is the "Allocating an ArrayBuffer object is not guaranteed to succeed." note, but even that only applies after all the bytes have been fetched and somehow magically stored. So as written, it seems a conformant implementation always needs to download all the bytes, emit progress events along the way, and can only fail after that.

Should failing earlier, when we know the overall fetch is not going to succeed, be allowed?

--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/whatwg/xhr/issues/244
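[Editor's sketch, not part of the original report: a rough illustration of the pattern an affected speed-test-style page might use. The URL and the byte accounting are made up. Such a page only cares about progress events, never about the finished blob, so under the old behavior it works even when the download ultimately errors out, while under the new behavior it gets an error event up front and no progress events at all.]

    // Hypothetical speed-test-style page (illustrative only):
    const xhr = new XMLHttpRequest();
    xhr.open("GET", "/very-large-test-file"); // made-up URL; response larger than available storage
    xhr.responseType = "blob";

    let lastLoaded = 0;
    let lastTime = performance.now();

    xhr.onprogress = (event) => {
      // Old behavior: these keep firing while bytes arrive, so throughput can
      // be measured even though the request eventually errors out when disk
      // space runs out.
      const now = performance.now();
      const kibPerMs = (event.loaded - lastLoaded) / 1024 / (now - lastTime);
      console.log("throughput ~" + kibPerMs.toFixed(2) + " KiB/ms");
      lastLoaded = event.loaded;
      lastTime = now;
    };

    xhr.onerror = () => {
      // New behavior: if Content-Length already exceeds what can be stored,
      // the request fails here immediately, with no progress events first.
      console.log("request errored");
    };

    xhr.send();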
Received on Friday, 12 April 2019 18:47:11 UTC