Re: Content encoding problem...

Just a summary response ...

Jim said:
>Our performance work makes it pretty clear we should straighten this
>out somehow, as it can really help low bandwidth users significantly (and
>nothing else other than style sheets does as much).  Our tests showed
>that the deflate side is very very fast, and it would be a good optimization
>if HTML documents were routinely sent in compressed form.  (We'll try

Apache is already capable of optionally providing documents in compressed
form using the existing content negotiation facilities.  The protocol
does not need to change for that to work.
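
For example, with Apache's MultiViews negotiation one can store both
paper.html and paper.html.gz and let the server pick based on the request
(a sketch with illustrative file names; exact directive syntax varies by
server version):

    # srm.conf or .htaccess (illustrative)
    Options MultiViews
    AddEncoding x-gzip gz

A request for /paper carrying "Accept-Encoding: x-gzip" can then be
answered with the compressed variant, labelled with both Content-Type:
text/html and Content-Encoding: x-gzip.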

When I first started testing HTTP/1.0 clients, almost all of them understood
Content-Encoding.  Are you saying that they have regressed?  Are you sure
that the tests were not faulty (i.e., was the server output checked to
be sure that it was actually sending the correct Content-Type and
Content-Encoding headers)?  Or do the failures only apply when "deflate"
is used as the Content-Encoding?  Note that most current clients will
only accept "x-gzip" and "x-compress", if anything.
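
For reference, a correctly labelled exchange would look something like
this (illustrative):

    GET /paper.html HTTP/1.0
    Accept-Encoding: x-gzip, x-compress

    HTTP/1.0 200 OK
    Content-Type: text/html
    Content-Encoding: x-gzip

    ...gzip-compressed HTML body...

If the server omits or garbles either of those response headers, a
failure says nothing about the client.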

If the tests are accurate and Content-Encoding no longer works, then I
have a more radical suggestion: drop it entirely.  Content-Encoding
was a terrible extension to begin with and would have been better
represented as a layered Content-Type, as in

    Content-Type: application/gzip (text/html)

and

    Content-Type: application/secure (application/gzip (text/html))

That would allow HTTP to be much more MIME-compliant than it is currently.
This is a significant design change, but if the test results are accurate
it means that the design considerations of two years ago no longer apply.

However, it would take serious ineptitude on the part of browser
developers for them not to support Content-Encoding at this late date.
At the very least, I would expect them to complain about it first,
and I have had no indication of that over the past few years.

Jeff said:
>although I'd suggest thinking about changing the whole sentence
>to read something like:
>    If an Accept-Encoding header is present, and if the server cannot
>    send a response which is acceptable according to the
>    Accept-Encoding header, then the server SHOULD send a response
>    using the default (identity) encoding.

I like this new wording, regardless.
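
Concretely (an illustrative exchange): if the client lists only encodings
the server cannot produce, the server falls back to the identity encoding:

    GET /paper.html HTTP/1.1
    Accept-Encoding: compress

    HTTP/1.1 200 OK
    Content-Type: text/html

    ...uncompressed HTML body...

No Content-Encoding header is sent, since the entity is unencoded.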

Henrik suggested:
>What if we said that:
>
>"HTTP/1.1 servers or proxies MUST not send any content-encodings other than
>"gzip" and "compress" to a HTTP/1.0 client unless the client explicitly
>accepts it using an "Accept-Encoding" header."

No.  Content-Encoding is a property of the resource (i.e., only the origin
server is capable of adding or removing it on the server-side, and only
the user agent is capable of removing it on the client-side).  The protocol
should not dictate the nature of a resource, nor the conditions under which a
server can send an otherwise valid HTTP entity.  The protocol must remain
independent of the payload.

Transfer-Encoding, on the other hand, represents HTTP-level encodings.
If we want to support HTTP-level compression, it must be done at that
level.  However, I would rather see work being done on HTTP/2.x, wherein
we could define a tokenized message format that is more efficient than
body compression alone, and that would introduce no worse incompatibilities
with existing software than adding body compression would.
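
As a sketch of what hop-by-hop compression might look like at that level
(hypothetical; the only transfer-coding currently defined is "chunked"):

    HTTP/1.1 200 OK
    Content-Type: text/html
    Transfer-Encoding: gzip, chunked

Here the compression is a property of the transfer between hops, not of
the resource, so any proxy could add or remove it without changing the
entity.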

.....Roy

Received on Friday, 14 February 1997 21:01:14 UTC