
Re: Digest mess

From: John Franks <john@math.nwu.edu>
Date: Sat, 3 Jan 1998 15:27:53 -0600 (CST)
To: Scott Lawrence <lawrence@agranat.com>
Cc: jg@w3.org, paulle@microsoft.com, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com, ietf-http-wg@w3.org
Message-Id: <Pine.LNX.3.96.980103134611.2151A-100000@hopf.math.nwu.edu>
On Thu, 18 Dec 1997, Scott Lawrence wrote:

> 
> 
>   Removing the problematic field values from the calculation and
>   adding the original values as attributes are both
>   backward-incompatible changes; the question then becomes which will
>   do more:
>      1) to support authentication and integrity protection
>      2) to encourage wider implementation and use of the feature.
>   I think that with respect to (1) the two alternatives are
>   equivalent; neither ends up really preventing attacks based on cache
>   manipulation, and either is capable of detecting such attacks.  It
>   seems clear to me that making the scheme simpler by removing
>   elements from the calculation is more likely to encourage wider
>   implementation. 
>   

Actually, the contrary may be the case.  It seems that the ability to
digest *arbitrary* origin headers, including as-yet-undefined ones, is
very important to some potential implementers.

Currently the best idea on how to do this is Jeff Mogul's suggestion
that the origin agent take the set of headers it wishes to digest,
such as

HTTP/1.1 200 OK
Date: Sat, 03 Jan 1998 19:52:37 GMT
Expires: Sun, 04 Jan 1998 19:52:37 GMT
Last-modified: Fri, 25 Jul 1997 15:44:39 GMT
ETag: "33d8c9e7=30845=6c5"
Content-type: text/html
Content-length: 1733

and encode them (CRLFs included) using either base64 or URL-encoding,
then put the result in an "origin-headers" field of Authentication-info,
getting something like

origin-headers =
 "SFRUUC8xLjEgMjAwIE9LCkRhdGU6IFNhdCwgMDMgSmFuIDE5OTggMTk6NTI6
  MzcgR01UCkV4cGlyZXM6IFN1biwgMDQgSmFuIDE5OTggMTk6NTI6MzcgR01U
  Ckxhc3QtbW9kaWZpZWQ6IEZyaSwgMjUgSnVsIDE5OTcgMTU6NDQ6MzkgR01U
  CkVUYWc6ICIzM2Q4YzllNz0zMDg0NT02YzUiCkNvbnRlbnQtdHlwZTogdGV4
  dC9odG1sCkNvbnRlbnQtbGVuZ3RoOiAxNzMzCgo="

A few issues come to mind:

1.  URL-encoding is simpler and probably shorter (I think).  Base64,
on the other hand, has a standard convention for breaking long lines,
and we will surely need to do that.

2. The client may also send a digest, but it has no Authentication-info
header.  Does it need one?

3. Gzip'ing the headers above and then base64-encoding gave only a
slight improvement over base64 alone:  285 bytes vs. 313.
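
For anyone who wants to redo the comparison, a rough sketch (this uses
zlib's deflate rather than gzip proper, so the byte counts will differ
slightly from the 285 vs. 313 quoted above; bare-LF line endings are
assumed, as in the sample value):

```python
import base64
import urllib.parse
import zlib

# Same hypothetical header block as in the base64 example.
headers = (
    "HTTP/1.1 200 OK\n"
    "Date: Sat, 03 Jan 1998 19:52:37 GMT\n"
    "Expires: Sun, 04 Jan 1998 19:52:37 GMT\n"
    "Last-modified: Fri, 25 Jul 1997 15:44:39 GMT\n"
    'ETag: "33d8c9e7=30845=6c5"\n'
    "Content-type: text/html\n"
    "Content-length: 1733\n"
    "\n"
)
raw = headers.encode("ascii")

plain_b64 = base64.b64encode(raw)                       # base64 alone
deflate_b64 = base64.b64encode(zlib.compress(raw, 9))   # compress first, then base64
url_encoded = urllib.parse.quote(headers)               # URL-encoding alternative

print(len(plain_b64), len(deflate_b64), len(url_encoded))
```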

John Franks
john@math.nwu.edu
Received on Saturday, 3 January 1998 13:33:06 EST
