Re: Comments on the HTTP/1.0 draft.

>> What the MIME specs state in this area is irrelevant. MIME is designed to
>> pass through mail gateways. HTTP is not. It is the 8 bit clean restriction
>> that is HTTP's main win over other protocols.
>No way.  FTP is not 8 bit clean?  Finger is not 8 bit clean?
>It is the uniform and portable representation of metadata (i.e. HTTP headers)
>that is HTTP's main win over other protocols.  FTP could be nearly as good if
>there were uniform ways to find out, rather than heuristically guess at,
>things like the last modification time and content-type of files.
>HTTP mostly combines the headers and content-labeling of email/MIME, the
>file-transfer of FTP, and the lightweight request-reply nature of finger.
>Quiz:  Which of these three protocols does not employ canonicalization?

  It's not clear that you're discussing the same thing as everyone else. No
one disputes the need for the request and response headers to use standard
EOL encoding. That's a given and everyone understands that this is the
case. This is a no-brainer because all the headers are generated by clients
and servers and can always be generated correctly.
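To make the distinction concrete, here is a minimal sketch (function and variable names are illustrative, not from any particular server) of the uncontested part: request and response heads are generated by the software itself, so emitting canonical CRLF line endings there costs nothing.

```python
# Sketch: HTTP message heads are built by the client/server code, so
# the standard CRLF line ending can always be produced correctly.
# Names here are hypothetical.
def build_response_head(status="200 OK", headers=None):
    headers = headers or {}
    lines = ["HTTP/1.0 " + status]
    lines += ["%s: %s" % (k, v) for k, v in headers.items()]
    # CRLF terminates every header line; an empty line ends the head.
    return "\r\n".join(lines) + "\r\n\r\n"

head = build_response_head(headers={"Content-Type": "text/html"})
```

The object-body that follows the blank line is a different matter, which is the point of the paragraphs below.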

This discussion is about object-bodies ONLY. Frankly, your continued
arguing for canonicalization in this area is contrary to A) current
practice, B) common sense, and C) any perceived need on anyone else's part.
I have yet to hear a factual, supported reason why this must be done or
else HTTP will fail. It isn't done now, everything works great, and nobody
is forced to waste a bunch of CPU cycles to munge text files to keep a few
standards junkies happy.

I apologize if my tone here is unprofessional, but continuing to throw up
obstacles with respect to the subject of tolerant interpretation of line
ends without citing any rationale other than "canonicalization is a Good
Thing" is wearing thin. Either cite some evidence as to why it's a good
thing or leave it be. (And references to other protocols are of limited
value, because most are batch oriented vs. interactive, and none except
gopher experiences anywhere near the load of HTTP in terms of transactions
per hour. Performance is THE major issue here, not squeaky-clean, overly
restrictive standards definitions.)
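For clarity, the "tolerant interpretation" being argued for puts the small amount of work on the receiver: a client simply accepts CR, LF, or CRLF as a line break in a text body. A minimal sketch of that idea (hypothetical helper, not from any shipping client):

```python
# Sketch: tolerant line-end interpretation on the receiving side.
# Accept CRLF, bare CR, or bare LF as a line break in a text
# object-body, rather than requiring the sender to canonicalize.
def split_text_lines(body):
    # Collapse CRLF first so it is not counted as two breaks,
    # then treat any remaining lone CR or LF as a break.
    return body.replace("\r\n", "\n").replace("\r", "\n").split("\n")

lines = split_text_lines("Mac line\rUnix line\nDOS line\r\nlast")
```

This is a handful of operations per body on the client, versus rewriting every text body on every server.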

Opinions that canonicalization has no impact on clients or servers and
should be done because it's "cleaner" aren't borne out by fact, at least as
far as the tests I have run and my experience with about 5,000
installed MacHTTP sites indicate. You are asking that current practice be
discarded in favor of an idea that has not been proven to be of any use to
the HTTP community. Pardon me if I am skeptical.

>> This is a character set issue, not a content type issue. If people want to
>> propose that the default character set interprets CRLF in this manner then
>> fair enough.
>HTTP supports different character sets? :-)

It could support a million. It doesn't matter. The ideal situation is for
HTTP servers to be able to completely ignore the data they are transporting
in object-bodies. They can all do this now. Your approach means that EVERY
server except your Perl version will have to be recoded to examine EVERY
byte of text content it transmits. This is absurd.
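To illustrate the per-byte work being objected to, here is a sketch (the function name is hypothetical) of what a canonicalizing server would have to run over every text body before sending it, where today it can pass the bytes through untouched:

```python
# Sketch: the per-byte pass a server would need in order to rewrite
# local line endings as canonical CRLF before transmission. A
# pass-through server inspects none of these bytes.
def canonicalize_eol(body: bytes) -> bytes:
    out = bytearray()
    i, n = 0, len(body)
    while i < n:
        b = body[i]
        if b == 0x0D:                    # CR: emit CRLF, absorb a following LF
            out += b"\r\n"
            if i + 1 < n and body[i + 1] == 0x0A:
                i += 1
        elif b == 0x0A:                  # bare LF: expand to CRLF
            out += b"\r\n"
        else:
            out.append(b)
        i += 1
    return bytes(out)
```

Every byte of every text object-body goes through this loop, on every transaction, which is exactly the CPU cost the paragraph above calls wasted.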

Chuck Shotton                             \
Assistant Director, Academic Computing     \   "Shut up and eat your
U. of Texas Health Science Center Houston   \    vegetables!!!"
(713) 794-5650

Received on Wednesday, 7 December 1994 11:00:24 UTC