
Re: Content-MD5

From: Ned Freed <NED@innosoft.com>
Date: Sun, 05 Nov 1995 11:33:33 -0800 (PST)
To: Rich Salz <rsalz@osf.org>
Cc: dl@hplyot.obspm.fr, dsr@w3.org, fielding@avron.ICS.UCI.EDU, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com, http-wg-request%cuckoo.hpl.hp.com@hplb.hpl.hp.com
> > Why didn't you bring it up on the IETF list or with the IESG?

> Because this is a revision of 1544 which is basically two years old.
> While I think per-protocol headers are the wrong way to do things, it
> makes sense to me that current practice should be standardized.

As I said before, the IETF generally doesn't standardize two different
ways to do the same thing.

> > This is only partially true. The IETF has a fairly firm policy of not
> > duplicating work whenever it can be avoided. As such, given the existence of a
> > standardized, workable scheme that can already be used to perform this
> > function, it is going to be difficult to obtain approval for another, duplicate
> > mechanism. I for one would object to it in my capacity as a member of the
> > applications area directorate.

> The IETF also has a fairly strong history of accepting duplication and letting
> the market decide.  If someone can put forward a draft for a header that
> shows two hash mechanisms, and running code I can't imagine the IETF
> rejecting it.

I regard it as axiomatic that the IETF "accepts duplication". The networking
world duplicates functionality all the time, and as such the IETF has no choice
but to accept this practice.

However, this attitude does not extend to work done within the IETF itself.
Duplication of work within the IETF is actually quite rare, and when it happens
it's usually the result of duplication having occurred elsewhere, or of work
having been done outside the IETF that the IETF isn't entirely
comfortable with. The acceptance of both ongoing work on Whois++ and on X.500
is a good example of the latter, while acceptance of work on MIME<-->X.400
interoperation is a good example of the former.

You need to examine the record a lot more carefully if you think the IETF
actually condones development and standardization of overlapping protocols.
Look at the SNMPv2 mess, for example: one possible solution is to have two
different security mechanisms available. This would break the deadlock and
would allow SNMPv2 to progress. Yet this alternative is not even being
considered -- even though failure may in fact lead to the demise of SNMPv2.

There is even abundant evidence of this position in the work that's been done
on MIME. Proposals have been made to also standardize some alternatives to
MIME. These were rejected out of hand on the basis that they duplicated
standards-track work. In addition, not many people are aware of the fact that
MIME actually supersedes an extant, *standard* protocol -- RFC1049. This was in
fact a substantive issue at one point, and will require taking action to retire
RFC1049 before MIME progresses to standard.

I could go on and on, but suffice it to say that objections to protocols on
the basis of duplication of function are taken very, very seriously, and
such objections are going to be made should an attempt be made to standardize
some alternative to content-md5 without a clear understanding of how it
interacts with content-md5. I will object myself in my capacity as a member
of the Applications Area Directorate if it comes down to it.
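For context, the mechanism under discussion is simple: RFC 1544 defines the
Content-MD5 field value as the base64 encoding of the 128-bit MD5 digest of the
message body. A minimal sketch in modern Python (the hashlib and base64 modules
are assumptions of this sketch, not anything available in 1995):

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    """Compute a Content-MD5 value per RFC 1544: the base64
    encoding of the 128-bit MD5 digest of the message body."""
    digest = hashlib.md5(body).digest()          # 16 raw bytes
    return base64.b64encode(digest).decode("ascii")

# An empty body yields the well-known digest d41d8cd9... encoded in base64:
print(content_md5(b""))  # → 1B2M2Y8AsgTpgAmY7PhCfg==
```

Any competing header would have to specify an equivalent, equally mechanical
computation, which is why the duplication question dominates the technical one.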

> > > My position at the present time is that the relative speed issues are more

> This is what I disagree with.  I think it is (heck, they) both good enough,
> and it doesn't matter anyway since most use will be off-line.

If it doesn't matter then what are your objections to content-md5?

Received on Sunday, 5 November 1995 11:52:51 UTC

This archive was generated by hypermail 2.3.1 : Wednesday, 7 January 2015 14:40:15 UTC