Re: Range Requests vs Content Codings

On 17/06/2014, Julian Reschke <julian.reschke@gmx.de> wrote:
>
> One way to combine Content Codings and range requests would be to create
> a new range unit, "bbcc" (bytes-before-content-coding), in which case
> the requested range would be applied to the non-content-coded
> representation, and the content-coding would be applied to the byte range.
>
> Such as:
>
>    GET /test HTTP/1.1
>    Host: example.org
>    Accept-Encoding: gzip
>    Range: bbcc=900000-
>
> This would retrieve the octets starting at position 900000, and apply
> content-coding gzip to the resulting octet sequence.
>

So something like this?

| Content-Type: multipart/byteranges; boundary=foo
|
| --foo
| Content-Type: text/plain
| Content-Range: bbcc 900000-1000000
| Content-Encoding: gzip
| Content-Length: 1234
|
| <standalone gzip stream>
| --foo--

(Paraphrased, because I can't look up the exact terminology on my phone.)
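
For what it's worth, here's a minimal sketch of what I imagine the
origin doing to build that part body, assuming it already holds the
full uncompressed representation (Python, names purely illustrative):

    import gzip

    def bbcc_part(representation, start, end):
        # A 'bbcc' range is taken before any coding is applied, so
        # slice the non-content-coded representation first ...
        chunk = representation[start:end + 1]
        # ... then apply the content-coding to just that slice; the
        # result is the standalone gzip stream in the part body.
        return gzip.compress(chunk)

    part_body = bbcc_part(representation, 900000, 1000000)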

> This also requires that both user agent and origin server understand the
> new range unit, but that appears to be easier to deploy than T-E (which
> requires all intermediaries to play along).
>
> Thoughts?
>

I think this still requires intermediaries to play along. What does a
caching proxy do when this request/response exchange passes through
it, especially if it doesn't know the new range unit? Does the
response need Vary: Range? Or are such responses always
Cache-Control: no-cache?

To my mind, this also opens up the idea of a 'bacc' range unit (bytes
after content-coding), as an explicit signal that the client only
wants the range if it's taken from the content-coded representation.
AFAIU it's currently a bit ambiguous what to do when a request has
both Accept-Encoding and Range headers. Of course, 'bacc' requires
there to be exactly one coding in the Accept-Encoding header, but it
could be useful for resuming a content-coded download. The same
caching issues as with 'bbcc' still apply, though.
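
To illustrate the resume case, I'm picturing a request along these
lines (sketch only; the 'bacc' syntax is obviously made up):

    GET /test HTTP/1.1
    Host: example.org
    Accept-Encoding: gzip
    Range: bacc=500000-

i.e. "give me the gzip-coded representation, starting at octet 500000
of the coded stream", which the client could append directly to the
partial .gz file it already has.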

-- 
  Matthew Kerwin
  http://matthew.kerwin.net.au/

Received on Tuesday, 17 June 2014 13:15:31 UTC