Re: canonical MIME headers

> I've just finished processing the "preceding" header so I've got the
> boundary marker and now I'm moving through the content.  I simply
> substitute some standard string for digest purposes for every boundary
> marker as I come across it.

> What am I missing?

Aside from the not-inconsiderable complexity of writing the right sort of
parser for this, I don't know of any problems actually computing the digest.
The problems arise when software attempts to take advantage of this new-found
ability to substitute boundary strings, since it knows the substitution won't
change the resulting MIC. If your new boundary is longer than the original,
you have to worry about line length issues. And even if your new boundary is
shorter than or the same size as the original, a line wrap in the middle of a
boundary parameter may force you to use a marker containing a space so as to
avoid the line length issues that could arise if you remove the fold as part
of your processing.
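
For what it's worth, a minimal sketch of the digest-time substitution being
described, in Python. This is my own illustration, not code from the original
poster: the stand-in string "--CANONICAL-BOUNDARY" and the helper name
canonical_digest are assumptions, and a real implementation would run over
the fully parsed message rather than a raw body string.

    import hashlib
    import re

    def canonical_digest(body, boundary):
        # Replace every boundary delimiter line ("--boundary" and the
        # closing "--boundary--") with a fixed stand-in before hashing,
        # so renaming the boundary leaves the digest unchanged. The
        # lookahead keeps trailing whitespace/CR on the line intact.
        delim = re.compile(r'^--' + re.escape(boundary) + r'(--)?(?=[ \t\r]*$)',
                           re.MULTILINE)
        canonical = delim.sub(
            lambda m: '--CANONICAL-BOUNDARY' + (m.group(1) or ''), body)
        return hashlib.md5(canonical.encode('utf-8')).hexdigest()

    # Renaming the boundary does not change the digest:
    msg = 'preamble\r\n--abc\r\npart text\r\n--abc--\r\n'
    assert canonical_digest(msg, 'abc') == \
           canonical_digest(msg.replace('abc', 'xyzzy'), 'xyzzy')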

Sounds pretty complicated to me. And more to the point: What does it gain? My
understanding is that the underlying goal here is to eliminate the requirement
that signed messages be 7bit. That requirement exists because of the need to
downgrade 8bit messages to 7bit in some cases, which currently would break the
signature. We therefore need a signature mechanism that is invariant when
encodings are downgraded. But such downgrading never needs to change the
boundary marker -- base64 never conflicts with an existing marker (its output
alphabet contains no hyphen, so no encoded line can begin with "--"), and
quoted-printable can be done in such a way that boundary marker collisions
cannot happen.
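
To illustrate that last point, a sketch of a quoted-printable encoder (again
mine, not anything mandated by the spec) that quotes the leading hyphen of
any line that starts with one. No encoded line can then begin with "--", so
it can never be mistaken for a boundary delimiter, whatever boundary is
chosen. Soft line breaks at 76 characters and trailing-whitespace protection,
which real quoted-printable also requires, are omitted here.

    def qp_encode_line(line):
        # Simplified quoted-printable encoding of a single line. Quoting
        # the first character whenever it is "-" guarantees the output
        # never begins with "--", so it cannot collide with any boundary.
        out = []
        for i, ch in enumerate(line):
            o = ord(ch)
            needs_quoting = ch == '=' or o < 32 or o > 126
            if i == 0 and ch == '-':
                needs_quoting = True    # defuse a potential "--boundary" line
            out.append('=%02X' % o if needs_quoting else ch)
        return ''.join(out)

    assert qp_encode_line('--not a boundary') == '=2D-not a boundary'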

				Ned

P.S. I should also mention that extremely long content-type fields are not a
purely academic concern. Applications exist that routinely generate
content-type fields thousands of characters long.

Received on Friday, 9 November 2001 11:26:54 UTC