Re: Content encoding problem...

!When I first started testing HTTP/1.0 clients, almost all of them understood
!Content-Encoding.  Are you saying that they have regressed?  Are you sure
!that the tests were not faulty (i.e., was the server output checked to
!be sure that it was actually sending the correct content-type and
!content-encoding headers)?  Or do the failures only apply when "deflate"
!is used as the Content-Encoding?  Note that most current clients will
!only accept "x-gzip" and "x-compress", if anything.

Dunno; worth careful investigation at this point.

!If the tests are accurate and content-encoding no longer works, then I
!have a more radical suggestion --- drop it entirely.  Content-encoding
!was a terrible extension to begin with and would have been better
!represented as a layered Content-Type, as in
!
!    Content-Type: application/gzip (text/html)
!
!and
!
!    Content-Type: application/secure (application/gzip (text/html))
!
!That would allow HTTP to be much more MIME-compliant than it is currently.
!This is a significant design change, but if the tests are true it means
!that the design considerations of two years ago no longer apply.
!
!However, it would take serious ineptitude on the part of browser
!developers for them not to support Content-Encoding at this late date.
!At the very least, I would expect them to complain about it first,
!and I have had no indication of that over the past few years.

Unfortunately, at this date, such a radical change seems unlikely to me
to win support in this working group.

In addition, our performance work shows that the fastest, easiest way
to get a significant performance gain beyond that provided by HTTP/1.1
itself (only 20% or so on dialup lines for our fetch test) is to get
most currently uncompressed datatypes compressed.  From our xplots of
our current tcp dumps, we know that there is NOTHING more to be gained
at the wire level, short of sending fewer bytes.  For most documents,
the savings from compression are much larger than those from making
the protocol itself more compact.  (A more compact protocol should be
deployed anyway, to reduce human latency and improve the efficiency of
cache validation; I believe most of the working group is very familiar
with my (almost unprintable) opinion of HTTP in the first place.)

The highest priority should therefore be to get compressed HTML
content widespread on the web as soon as possible, and then worry
about getting HTTP itself compact to reduce human latency (and get a
smaller gain).
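
For concreteness, the negotiation costs only one header in each
direction.  A hypothetical exchange (the names and lengths here are
illustrative, not taken from any real trace) would look like:

    GET /index.html HTTP/1.1
    Host: www.example.com
    Accept-Encoding: deflate

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Encoding: deflate
    Content-Length: 1397

    ...deflate-compressed entity body...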

Style sheets will also help download time, but compression helps
style sheets to the same degree, on top of the style-sheet savings
themselves.  Style sheets, however, require significant changes to
content, and will therefore take longer to deploy (requiring better
tools than we currently have before mere mortals can use them
routinely).

Our tests show that deflate does significantly better than any modem
compression (you can derive a number from the current tables in our
paper by taking the length of the HTML document with and without
compression on the PPP line; Henrik is going to get us the number from
a simpler test than working backwards from the table in the paper).
So whatever we do should be trivial to implement: our experience
implementing generic deflate and related decompression in libwww was
that it took only a few days, with no significant code-size overhead
(if you also intend to support PNG, which uses the same zlib library),
and was deployable immediately.  It saved packets on the wire, and
elapsed time, in all circumstances.
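
For a sense of the scale of the client-side work involved, here is a
minimal sketch in C (not the actual libwww code; the buffer sizes and
sample document are made up) of round-tripping a document through the
same zlib library that PNG uses:

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        /* Deflate a sample document, then inflate it back, as a
           decompressing client would. */
        const char *html = "<html><body>hello, hello, hello</body></html>";
        Bytef comp[256], plain[256];
        uLongf clen = sizeof(comp), plen = sizeof(plain);

        if (compress(comp, &clen, (const Bytef *)html, strlen(html)) != Z_OK)
            return 1;
        if (uncompress(plain, &plen, comp, clen) != Z_OK)
            return 1;

        printf("original %lu bytes, deflated %lu bytes\n",
               (unsigned long)strlen(html), (unsigned long)clen);
        return 0;
    }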

Waiting for an HTTP/2.X, as wonderful as that might be, strikes me as
sacrificing the good on the altar of the perfect.  (Do some quick
arithmetic: multiply the number of seconds saved by end users
(millions of them) by implementing it immediately, by the months
required to get a 2.x deployed, and you'll see how many human
lifetimes are involved.)
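
To make that arithmetic concrete, with round numbers that are my
assumptions rather than measurements:

    10,000,000 users x 20 pages/day x 1 second saved/page
      = 2 x 10^8 seconds saved per day

    A six-month deployment delay (~180 days) then costs
      3.6 x 10^10 seconds, or roughly 1,100 years --
    about 16 human lifetimes (at ~70 years each) spent waiting.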

!Jeff said:
!>although I'd suggest thinking about changing the whole sentence
!>to read something like:
!>    If an Accept-Encoding header is present, and if the server cannot
!>    send a response which is acceptable according to the
!>    Accept-Encoding header, then the server SHOULD send a response
!>    using the default (identity) encoding.
!
!I like this new wording, regardless.

I also like this wording.  I believe it reflects what was intended, and
how reasonable people (like us) first believed it should work, until
we went and read the detailed wording in the specification.  Right now,
the specification (at least) implies that an error should be returned
even though the uncompressed version could be sent instead.

However, I think the wording should still cover the case of being unable to
send the document in unencoded form, so I'd suggest something like:

    If an Accept-Encoding header is present, and if the server cannot
    send a response which is acceptable according to the
    Accept-Encoding header, then the server SHOULD send a response
    using the default (identity) encoding; if the identity encoding
    is not available, then the server SHOULD send an error response 
    with the 406 (Not Acceptable) status code.
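
To illustrate both branches with a hypothetical exchange (the resource
name is made up): given

    GET /paper.html HTTP/1.1
    Host: www.example.com
    Accept-Encoding: compress

a server that can produce the identity form would respond

    HTTP/1.1 200 OK
    Content-Type: text/html

(with no Content-Encoding header), while a server holding only, say, a
gzip-encoded copy and no identity form would respond

    HTTP/1.1 406 Not Acceptable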
