Re: canonical MIME headers

> > while Ned is correct that there's no need to do this when downgrading
> > from 8bit/binary/q-p to base64 because multipart boundaries inherently
> > cannot be confused with base64, there are re-encoders that can "upgrade"
> > as well as "downgrade", and which use the same code path to convert from
> > one encoding to another regardless of whether the destination is base64.
> > for those re-encoders it's easier to always rewrite boundary markers.
> 
> OK, I understand that.  But if we have per-part MICs calculated, why do
> we care whether boundary markers are rewritten or not?

clearly we can write a canonicalization algorithm that is independent of
both the choice of boundary marker and content-transfer-encoding.
(though it's not quite as simple as having per-part MICs)
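
(a sketch of what I have in mind, in Python, using the stdlib email
package; the code and the name canonical_mics are mine, nothing agreed
on in this thread.  the idea: digest each leaf part's *decoded* bytes
plus its content type, so the result is stable across re-encoding
between base64/q-p/8bit and across rewritten boundary markers.  it
deliberately ignores header canonicalization and line-ending
normalization for text parts, which is part of why this isn't quite as
simple as per-part MICs.)

import hashlib
from email import message_from_bytes
from email.message import Message

def canonical_mics(msg: Message, path=()):
    """Yield one (path, hexdigest) pair per leaf part.

    The digest covers the decoded body plus the content type, so it
    does not change when a re-encoder switches the part's
    content-transfer-encoding or rewrites multipart boundary markers.
    """
    if msg.is_multipart():
        for i, part in enumerate(msg.get_payload()):
            yield from canonical_mics(part, path + (i,))
    else:
        h = hashlib.sha256()
        h.update(msg.get_content_type().encode("ascii"))  # e.g. text/plain
        h.update(b"\0")                                   # separator
        h.update(msg.get_payload(decode=True) or b"")     # decoded bytes
        yield path, h.hexdigest()

with open("message.eml", "rb") as f:
    for path, digest in canonical_mics(message_from_bytes(f.read())):
        print(path, digest)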

the relevant questions would seem to be:

- besides re-encoding, what other kinds of munging would we need to
  accommodate, and what are the effects on the message?

- given that it would be possible to transmit information in
  boundary markers, content-transfer-encoding headers, and
  Received headers (as well as whatever else we find)...
  how much leakage potential is acceptable?

  (I mean, you can also transmit information using alternative
   means of encoding certain constructs in BER. But I gather that it's
   somehow considered acceptable to recast the entire structure into
   DER before evaluating and to ignore this source of leakage?
   Is there some logic that dictates when it's acceptable and when not?)
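
   (to make the BER point concrete, here's a toy example of my own,
    nothing beyond the basic tag/length/value rules: BER allows both a
    short-form and a long-form length for the same INTEGER, so the
    choice of form can carry a hidden bit; DER permits only the minimal
    short form here, so recasting to DER erases that channel.

    def decode_ber_integer(buf: bytes) -> int:
        """Decode a BER INTEGER, accepting either length form."""
        assert buf[0] == 0x02, "expected INTEGER tag"
        if buf[1] < 0x80:                   # short-form length
            n, body = buf[1], buf[2:]
        else:                               # long-form length
            k = buf[1] & 0x7F               # number of length octets
            n = int.from_bytes(buf[2:2 + k], "big")
            body = buf[2 + k:]
        return int.from_bytes(body[:n], "big", signed=True)

    short_form = bytes([0x02, 0x01, 0x01])        # INTEGER 1, short length
    long_form  = bytes([0x02, 0x81, 0x01, 0x01])  # INTEGER 1, long length
    assert decode_ber_integer(short_form) == decode_ber_integer(long_form) == 1
    # only short_form is valid DER; the long form's extra octet is
    # exactly the kind of redundant degree of freedom a covert channel
    # can use, just as a freely chosen boundary marker is in MIME.

    both encodings decode to the same value, so a verifier that works on
    the decoded structure never notices which one the sender picked.)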

Keith
