W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > January to April 1998

Re: FW: Digest mess

From: David W. Morris <dwm@xpasc.com>
Date: Wed, 31 Dec 1997 13:20:41 -0800 (PST)
To: Ben Laurie <ben@algroup.co.uk>
Cc: Jeffrey Mogul <mogul@pa.dec.com>, "'ietf-http-wg@w3.org'" <ietf-http-wg@w3.org>, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
Message-Id: <Pine.GSO.3.96.971231131635.1300L-100000@shell1.aimnet.com>


On Wed, 31 Dec 1997, Ben Laurie wrote:

> > make sense to specify that the field carry the base64 encoding
> > of a compressed form of the headers (using "deflate"?), which
> > would probably result in a net savings over the original header
> > sizes.  But I don't think it's worth another food-fight over this
> > detail.
> 
> It's a shame we have come to this pass, but I'm beginning to think that
> it is the only answer. Base64 is one answer, but wouldn't URL encoding
> also be easy enough and more compact?
> 
> If a cheap and easy to implement compression scheme can be used, then
> why not? (In which case, I'd guess base64 becomes a good idea).

In either case, embedded LWS must be allowed, in keeping with the spirit of
header values that may be continued across lines and headers which may be too long.

Also, the encoding rule should probably be something like:

1.  Compose the subset of headers to be digested.
2.  Combine into a single string with CR/LF between headers.
3.  Encode the whole string.

In other words, encode exactly what would have been sent over the wire
from the server.  Then all existing rules for separation, etc. just
apply after decoding.
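[A hypothetical sketch of the rule above, combined with the deflate-then-base64
suggestion quoted earlier. The function names, the header subset, and the use of
zlib's deflate are my illustration, not part of any proposal:]

```python
import base64
import zlib

def encode_digested_headers(headers):
    """Sketch of the three-step rule: compose the header subset,
    join with CR/LF exactly as it would appear on the wire, then
    compress and base64-encode the whole string.
    (zlib.compress wraps the raw deflate stream in a zlib header;
    shown for illustration only.)"""
    wire_form = "\r\n".join(f"{name}: {value}" for name, value in headers)
    compressed = zlib.compress(wire_form.encode("latin-1"))
    return base64.b64encode(compressed).decode("ascii")

def decode_digested_headers(encoded):
    """Reverse the encoding: decode, decompress, and split on CR/LF,
    so the existing header-parsing rules apply to the result."""
    wire_form = zlib.decompress(base64.b64decode(encoded)).decode("latin-1")
    return wire_form.split("\r\n")
```

[After decoding, the receiver sees exactly the bytes the server would have
sent, so separation and continuation rules need no special casing.]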

Dave Morris
Received on Tuesday, 6 January 1998 02:12:52 EST

This archive was generated by hypermail pre-2.1.9 : Wednesday, 24 September 2003 06:33:09 EDT