
Re: Accept-Transfer header field (was HTTP/1.1 Issues: TRAILER_FIELDS)

From: Jeffrey Mogul <mogul@pa.dec.com>
Date: Tue, 18 Nov 97 16:08:27 PST
Message-Id: <9711190008.AA00892@acetes.pa.dec.com>
To: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
X-Mailing-List: <http-wg@cuckoo.hpl.hp.com> archive/latest/4719
Roy Fielding writes:
    Qvalues are not useful for transfer encodings -- the coding must not
    be lossy, and the vast portion of work is being performed on the server
    side, so the server should be capable of choosing which one is best.
We can quibble about whether these should be called "qvalues" or
something else, but some sort of preference weighting is most
definitely useful.  Generally, there are both costs and benefits
associated with transfer-codings (such as compression) and there
isn't always a fixed tradeoff to be made.  For example, I tried
running a large file through both compress and gzip.  The output
of gzip was about 25% smaller than the output of compress, but
it took almost 4 times as much CPU time to do the compression.

On the other hand, it took about 33% longer, and slightly more memory,
to decompress the output of compress.  So, depending on parameters such
as network bandwidth and client CPU performance (and perhaps client RAM
availability), the server is not necessarily capable of choosing the
most appropriate transfer-coding without some help from the client.
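The trade-off described above is easy to reproduce today. The following sketch (mine, not the original 1997 experiment; it uses zlib levels 1 and 9 as stand-ins for the compress/gzip pair) shows how ratio and CPU time pull in opposite directions:

```python
# Sketch: measure the size/CPU trade-off between a fast and a thorough
# compression setting, analogous to the compress-vs-gzip comparison above.
# zlib levels 1 and 9 are stand-ins, not the original tools.
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 100_000

for level in (1, 9):
    start = time.process_time()
    out = zlib.compress(data, level)
    cpu = time.process_time() - start
    print(f"level {level}: {len(out)} bytes, {cpu:.3f}s CPU")
```

Which point on that curve is "best" depends on the client's bandwidth and CPU, which is exactly why the server cannot decide alone.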

Section 3.9 does say

    "Quality values" is a misnomer, since these values merely
    represent relative degradation in desired quality.

but it also says

    HTTP content negotiation (section 12) uses short "floating
    point" numbers to indicate the relative importance
    ("weight") of various negotiable parameters.

so I think we should continue to use "qvalues" here, rather than
defining a "pvalue" (preference value) just so we can pretend that
qvalues are only related to lossy encodings.
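To make the proposal concrete, here is a sketch of how a server might parse such a field. The Accept-Transfer field name and its qvalue syntax are the proposal under discussion, not settled spec text:

```python
# Sketch: parse a hypothetical Accept-Transfer value such as
#   "gzip;q=1.0, compress;q=0.5, identity;q=0"
# into a mapping of transfer-coding -> preference weight.
def parse_accept_transfer(value):
    prefs = {}
    for item in value.split(","):
        parts = [p.strip() for p in item.split(";")]
        coding = parts[0]
        q = 1.0  # default weight when no qvalue is given
        for param in parts[1:]:
            if param.startswith("q="):
                q = float(param[2:])
        prefs[coding] = q
    return prefs

print(parse_accept_transfer("gzip;q=1.0, compress;q=0.5, identity;q=0"))
# {'gzip': 1.0, 'compress': 0.5, 'identity': 0.0}
```

A qvalue of 0, as in "identity;q=0" above, would express outright refusal of a coding.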

One more thing: I couldn't find a place in the spec where it
says that transfer-codings "must not be lossy".  In fact, the
TranSend project at UC Berkeley has demonstrated the utility
of lossy codings in some applications, and I'm not sure we should
be banning these.

    Likewise, chunked and identity are always required -- there is no
    reasonable use for refusal based on lack-of-encoding.  Thus, the only
    feature we actually need is the ability to request a given
    transfer-encoding be used.
I disagree.  With respect to "chunked", we could presumably change

    All HTTP/1.1 applications MUST be able to receive and decode
    the "chunked" transfer coding,

to

    All HTTP/1.1 applications MUST be able to receive and decode
    the "chunked" transfer coding, except for clients that
    explicitly reject this transfer coding using a qvalue of 0
    in an Accept-Transfer request header field.

I'm not necessarily saying that we should do this, but it seems
safe to do so, and it might pay off for implementors of clients
with limited code space.
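For context, the decoding obligation in question is small but not free. A minimal sketch of it (my illustration; chunk extensions and trailers omitted for brevity):

```python
# Sketch: decode the "chunked" transfer coding -- each chunk is a
# hexadecimal size line followed by that many bytes and a CRLF,
# terminated by a zero-size chunk.
def decode_chunked(stream: bytes) -> bytes:
    body = b""
    pos = 0
    while True:
        eol = stream.index(b"\r\n", pos)
        size = int(stream[pos:eol], 16)   # chunk size in hex
        if size == 0:
            return body                    # zero-size chunk ends the body
        start = eol + 2
        body += stream[start:start + size]
        pos = start + size + 2             # skip the CRLF after the data

example = b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"
print(decode_chunked(example))  # b'Wikipedia'
```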

With respect to "identity", I believe that the argument has already
been made (the last time we debated the coding issue) that a client
might want to suppress the transfer of a large response if it cannot
be compressed first.  (The user might want to choose a different
link instead, for example.)

    Note that we must also include a requirement that chunked be the
    last encoding applied if there is more than one.

Is this really true?  I'm not sure that it would be a major win,
but why not allow a server to apply compression after chunking?
It would probably improve the overall compression ratio.  (I.e.,
you generally get a better ratio when compressing a large file
than when compressing a small prefix of the same file.)  What
goes wrong if we allow this?
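The ratio claim is easy to check. This sketch (my illustration, not text from the thread) compares one deflate stream over a large body against compressing 1 KB pieces independently; the single stream shares its history window across the whole input, which models the large-file-versus-small-prefix point above:

```python
# Sketch: one compression stream over the whole body vs. independently
# compressed 1 KB pieces.  The shared history in the single stream is
# the ratio advantage described above.
import zlib

data = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 2000

whole = len(zlib.compress(data))
pieces = sum(len(zlib.compress(data[i:i + 1024]))
             for i in range(0, len(data), 1024))
print(f"one stream: {whole} bytes; independent 1 KB pieces: {pieces} bytes")
```

(Compressing the chunked stream as a whole would of course also see all the data; the per-piece comparison stands in for the "small prefix" case.)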

Received on Tuesday, 18 November 1997 16:14:32 UTC
