- From: Konstantin Darutkin <notifications@github.com>
- Date: Thu, 25 Sep 2025 14:16:57 -0700
- To: whatwg/streams <streams@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
- Message-ID: <whatwg/streams/issues/1357@github.com>
spalt08 created an issue (whatwg/streams#1357)

### What is the issue with the Streams Standard?

Hi,

When working with the DecompressionStream API I noticed a major difference between Safari, Chrome, and Firefox. If you pass compressed binary data with arbitrary trailing padding, the decompression stream fails, however:

* In Chrome you can actually read chunks of decompressed data before encountering the error: `TypeError: Junk found after end of compressed data.`
* In Safari and Firefox the behaviour depends on chunking: if the last chunk has some extra bytes, the output is discarded and an error is thrown: `TypeError: Extra bytes past the end.` (Safari) and `TypeError: Unexpected input after the end of stream` (Firefox)

--------

After checking the [spec](https://compression.spec.whatwg.org/#dom-decompressionstream-decompressionstream), it seems like both behaviours are spec-compliant, since:

* The spec requires that trailing data after the end of a compressed stream is an error for all three formats.
* During per-chunk processing, "decompress and enqueue a chunk" says "Let buffer be the result of decompressing… If this results in an error, then throw a TypeError." That means an implementation that notices the trailing bytes while handling a single incoming chunk can throw immediately, before enqueuing anything.
* At the end (on close), "decompress flush and enqueue" says "If the end of the compressed input has not been reached, then throw a TypeError." If earlier chunks already produced output and only this final validation fails, an implementation may already have enqueued some decompressed bytes before the final error is thrown.

The spec neither mandates detecting the trailing-data error as early as possible (as opposed to only on flush), nor requires suppressing already-enqueued output if a later error occurs. It only mandates that trailing data is an error.
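The divergence can be probed with a small script. This is a hedged sketch, not the gist linked below: it assumes an environment where `CompressionStream` and `DecompressionStream` are globals (modern browsers, or Node.js ≥ 18), and the helper names (`gzip`, `decompressCollecting`) are mine. It compresses a payload, appends junk bytes after the gzip trailer, and collects whatever decompressed chunks arrive before the stream errors — non-empty in Chrome, possibly empty in Safari/Firefox.

```javascript
// Compress bytes with CompressionStream and return the result as Uint8Array.
async function gzip(bytes) {
  const cs = new CompressionStream("gzip");
  const writer = cs.writable.getWriter();
  writer.write(bytes);
  writer.close();
  return new Uint8Array(await new Response(cs.readable).arrayBuffer());
}

// Decompress bytes, returning { chunks, error }: every chunk that was
// enqueued before the error (if any), plus the error itself (or null).
async function decompressCollecting(bytes) {
  const ds = new DecompressionStream("gzip");
  const writer = ds.writable.getWriter();
  // The write/close promises may reject once trailing junk is detected;
  // swallow those rejections so they do not surface as unhandled.
  writer.write(bytes).catch(() => {});
  writer.close().catch(() => {});

  const reader = ds.readable.getReader();
  const chunks = [];
  try {
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      chunks.push(value);
    }
    return { chunks, error: null };
  } catch (error) {
    return { chunks, error };
  }
}
```

Feeding `decompressCollecting` a payload with a few junk bytes appended after the gzip trailer exercises exactly the case above: on Chrome `chunks` holds the decompressed data and `error` is the `TypeError`; on Safari/Firefox `chunks` may be empty.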
So Chrome’s "emit some output, then throw on close" and Safari/Firefox’s "throw without emitting" are both consistent with the current algorithms.

--------

An actual problem appears once you start passing chunked input data to the decompression stream. Say you feed the stream two chunks: the first is valid, and the second is partially valid and ends in arbitrary padding. In this case:

* The same JS code produces different results in Chrome and Safari/Firefox.
* JS-based alternatives such as `pako` or `fflate` handle such cases without any problems.
* There is no way to tell whether a compressed stream is valid or padded before passing the whole payload through the `DecompressionStream`, so the only reliable option so far is to try `DecompressionStream` and gracefully fall back to a JS-based library.
* The actual wording of the `TypeError` is vendor-specific and not defined by the spec (and therefore might change in the future), so checking whether the compressed data is invalid/corrupted or merely padded with arbitrary data is tricky.

--------

Reproduction script, which is DevTools console-friendly: https://gist.github.com/spalt08/98554a15a3cdf13ca5695c03b35bcd3f

Also, bug tickets for [Safari](https://bugs.webkit.org/show_bug.cgi?id=299541) and [Firefox](https://bugzilla.mozilla.org/show_bug.cgi?id=1990921)

--
Reply to this email directly or view it on GitHub: https://github.com/whatwg/streams/issues/1357
You are receiving this because you are subscribed to this thread.
Message ID: <whatwg/streams/issues/1357@github.com>
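The fallback strategy described in the issue can be sketched as follows. This is an illustrative sketch, not code from the issue: `jsInflate` is a hypothetical stand-in for a JS-based decompressor such as `pako.ungzip` or `fflate.gunzipSync`, and the example assumes `DecompressionStream` and `Response` are available as globals (browsers, Node.js ≥ 18). Note that it matches on the error *type* rather than its message, since the wording is vendor-specific.

```javascript
// Try the native DecompressionStream first; on a TypeError (which covers
// both genuinely corrupt input and spec-compliant rejection of trailing
// padding), retry with a JS-based inflater passed in by the caller.
async function decompressWithFallback(bytes, jsInflate) {
  try {
    const ds = new DecompressionStream("gzip");
    const writer = ds.writable.getWriter();
    writer.write(bytes).catch(() => {}); // may reject on junk; handled below
    writer.close().catch(() => {});
    // Consuming the readable via Response rejects if the stream errors.
    return new Uint8Array(await new Response(ds.readable).arrayBuffer());
  } catch (err) {
    if (err instanceof TypeError) return jsInflate(bytes);
    throw err;
  }
}
```

The cost of this pattern is that padded payloads are decompressed twice — once natively until the error, then again in JS — which is exactly the overhead the issue argues the spec's looseness forces on authors.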
Received on Thursday, 25 September 2025 21:17:01 UTC