Re: #445: Transfer-codings

On 7 April 2014 15:04, Roberto Peon <grmocg@gmail.com> wrote:

>
> I still don't think compression at the protocol/stream layer makes sense.
> In my experience, it never worked well in our (SPDY) experiments: it added
> complexity, opened the door for lots of DoS vulnerabilities, increased
> memory requirements, increased CPU requirements, and rarely helped w.r.t.
> bandwidth for well-constructed sites which compressed their resources.
>
> The cost/benefit here is extremely dubious.
>
>
As I've said, I'm happy to not block HTTP/2 on this, if we can address it
in HTTP/3, and if H3 isn't going to be another 15 years down the track.

But I have to respond to one particular type of comment that keeps coming
up: "*well-constructed sites which compressed their resources.*"

That's a big value judgement on what constitutes good site design. Yes,
in lots of cases it makes sense to compress your resources and have
multiple representations, especially for static resources; but what
about the sites that aren't like that? Why is it bad site design to
have a big resource that can be accessed with ranges? If the answer is
that doing so would require TE in order to get compression, then the
argument is circular (TE is only needed by bad sites; those sites are
bad because they need TE). If it's that caches don't handle ranged,
transfer-coded responses properly, then it's a chicken-and-egg problem.
The only reasons I can think of for calling such sites bad are either
circular or "I don't like them." Is there a real reason?
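
To make that concrete: a server holding one big, uncompressed
representation could serve a range and still compress it on the wire.
A hypothetical exchange (the host, resource, and sizes here are
invented for illustration):

    GET /big-dataset.csv HTTP/1.1
    Host: example.com
    Range: bytes=0-65535
    TE: gzip

    HTTP/1.1 206 Partial Content
    Content-Range: bytes 0-65535/1073741824
    Transfer-Encoding: gzip, chunked

    [gzipped, chunked octets of the selected 64 KiB]

The range is computed over the single canonical representation, and
the compression is hop-by-hop, so the origin never has to store or
negotiate a second, pre-compressed variant.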

Why should I make my web API use "?start=N&end=M" when I could use
"Range: x-records=N-M"?
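
A sketch of the latter (hypothetical; the "x-records" unit and the
numbers are just illustrative):

    GET /api/records HTTP/1.1
    Host: api.example.com
    Range: x-records=100-149

    HTTP/1.1 206 Partial Content
    Content-Range: x-records 100-149/20000

The range unit names the thing being sliced, the same way "bytes"
does, instead of every API minting its own query-string convention.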


-- 
  Matthew Kerwin
  http://matthew.kerwin.net.au/
