Re: HTTP: T-T-T-Talking about MIME Generation

I am at home with the flu, logged in over a serial line, so here are responses
to many comments.

From: Albert-Lunde@nwu.edu (Albert Lunde)

|With so many other deviations from MIME, I suggest we should drop the (rather
|complex) MIME multi-part structure based on boundaries, etc. and only allow
|multi-part messages defined by a Content-Length byte count.

Rather complex?  Hmmm.  One of the big wins from MIME is that it is trivial 
to encode and decode, especially over an 8-bit pipe, where it is a matter of
wrapping and unwrapping.  I don't see a problem, aside from the remote
possibility that an intelligently-generated boundary marker could appear in
the data.

Working code already exists to perform MIME encoding and decoding; the main
problem in implementing multipart for the web is designing a scheme by which
we can pack and unpack the parts while preserving the structure required for
this application of multipart.
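
To make the packing side concrete, here is a minimal sketch using Python's
standard email package (modern tooling chosen for brevity; the
Content-Location labeling of parts is my assumption, not a settled scheme):

    import mimetypes
    from email import encoders
    from email.mime.image import MIMEImage
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    def pack_document(html, images):
        """Pack an HTML doc and its inline images into one multipart/mixed body.

        The HTML goes first so a client can render it (with holes) before
        any image data arrives.  `images` maps a name/URL to raw bytes.
        """
        outer = MIMEMultipart("mixed")  # the library generates the boundary
        outer.attach(MIMEText(html, "html"))
        for name, data in images.items():
            subtype = (mimetypes.guess_type(name)[0] or "image/gif").split("/")[1]
            # encode_7or8bit marks the part 8bit rather than base64-encoding
            # it, consistent with the 8-bit channel argument below.
            part = MIMEImage(data, _subtype=subtype,
                             _encoder=encoders.encode_7or8bit)
            part.add_header("Content-Location", name)  # hypothetical labeling
            outer.attach(part)
        return outer.as_bytes()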

|We'd still need mechanisms in the headers to distinguish what
|initial headers applied to the whole transaction, and what
|to a single body, and to indicate the presence of recursion
|(multi-part stuff in a body section) if we allow it.

Assume inheritance from top-level headers unless overridden by body-part
headers.
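
For illustration, that rule might produce something like this on the wire (my
sketch of the convention, not normative text from any spec):

    HTTP/1.0 200 OK
    Content-Type: multipart/mixed; boundary=xyzzy
    Content-Language: en

    --xyzzy
    Content-Type: text/html

    ...this part inherits Content-Language: en from the top level...
    --xyzzy
    Content-Type: text/plain
    Content-Language: fr

    ...this part overrides the inherited language...
    --xyzzy--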

From: dmk@allegra.att.com (Dave Kristol)
|>"Daniel W. Connolly" <connolly@hal.com> wrote:
|
|  > Meanwhile, commercial folks are implementing HTTP-NG at lightning
|  > speed. Six months from now, all the major vendors are doing
|  > interoperable compression and encryption over something like SCP or
|  > SSL (not to mention strong authentication).
|
|Sorry, I'm skeptical about this statement.

Yeah, I have a touch of the flu, and it is especially affecting my sarcasm
meter, so I couldn't tell if Dan was serious about the 6mo time frame or not.
I think the odds of that happening are somewhere between those of a photon 
spooking my CPU and those of finding a random base64-encoded 64-bit number in 
random binary data.

From: Albert-Lunde@nwu.edu (Albert Lunde)

|I don't see how a message with a body containing a un-encoded GIF
|can be represented with the MIME RFC.

Easily.  I believe it is properly demonstrated in my example with
Content-Transfer-Encoding: 8bit.  MIME was built around 7-bit transport, so we
are used to having to decode base64 data because we mostly see MIME in mail 
(or don't, as the case may be), but it is designed to work just as well on an 
8-bit channel.  It seems people are seeing a problem here where none exists.  

The MIME spec provides for the unencoded transfer of 8-bit data, HTTP operates
over an 8-bit channel.  What's the problem?
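
For the skeptical, here is roughly what such a body part looks like on the
wire (an illustrative sketch, not text lifted from the MIME RFC):

    --boundary42
    Content-Type: image/gif
    Content-Transfer-Encoding: 8bit

    GIF87a...raw, unencoded image octets...
    --boundary42--

No base64, no quoted-printable: the octets go over the 8-bit channel as-is.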

|I think having a multi-part structure that logically maps onto MIME
|is a good idea, but we seem to have made and be making different
|decisions about transport on a byte-by-byte level.

As long as you leave an encoding trail in your CR/LF normalization scheme,
you should be able to revert to a known state by traversing that trail in
reverse.
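
In other words (a minimal sketch of one such scheme; the canonical CRLF form
is my assumption, not something we have agreed on): normalize local line
endings outbound, and invert the same mapping inbound.

    def to_wire(text):
        """Normalize any local line endings (CRLF, LF, or bare CR) to CRLF.

        Text body parts only; binary parts must pass through untouched.
        """
        return text.replace("\r\n", "\n").replace("\r", "\n").replace("\n", "\r\n")

    def from_wire(text, local_eol="\n"):
        """Reverse the trail: map canonical CRLF back to the local convention."""
        return text.replace("\r\n", local_eol)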

From: john@math.nwu.edu (John Franks)

|It seems to me that this "user's perceived performance" or UPP is going
|to be the dominant consideration for commercial client developers.  If
|they can't match Netscape they simply won't be viable.

Until Netscape's net.chickens come home to net.roost, that is.

Netscape's UPP win comes as it GETs the HTML doc, then initiates as many TCP 
connections as there are images it lacks in its cache.  As soon as it has 
sizing info, it renders the document with properly-sized holes for the images.  
As the data arrive for each image, it renders them all simultaneously until 
the transmissions close.  

Under a multipart solution, what is to stop the client from creating a thread 
to conduct the multipart GET and, as soon as the HTML doc arrives (transferring 
it as the first body part would be a smart thing), rendering it with 'n' holes, 
then resizing and filling in the holes as their data arrive--all over a 
single TCP connection, or, assuming some inlines reside on other servers, 
fewer than 'n' connections?
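
Nothing, as far as I can see.  Here is a toy sketch of the client-side loop
(my own simplification: it assumes CRLF line endings, line-oriented reads, and
a boundary that never appears in part data; on_part stands in for whatever the
browser's renderer actually does):

    def read_multipart(stream, boundary, on_part):
        """Stream body parts to a callback as they arrive on one connection.

        `stream` is a file-like object over the TCP connection; `boundary`
        comes from the top-level Content-Type header.
        """
        delim = b"--" + boundary
        while True:                      # skip preamble up to first delimiter
            line = stream.readline()
            if not line or line.rstrip(b"\r\n") == delim:
                break
        while True:
            headers = {}
            while True:                  # part headers end at a blank line
                line = stream.readline()
                if line in (b"\r\n", b""):
                    break
                name, _, value = line.decode("ascii").partition(":")
                headers[name.strip().lower()] = value.strip()
            body = []
            closing = False
            while True:                  # part body ends at the next delimiter
                line = stream.readline()
                stripped = line.rstrip(b"\r\n")
                if not line or stripped == delim + b"--":
                    closing = True       # EOF or closing delimiter
                    break
                if stripped == delim:
                    break
                body.append(line)
            on_part(headers, b"".join(body))
            if closing:
                return

The callback renders the HTML part immediately with 'n' holes, then fills each
hole as its image part completes--one connection, no per-image handshakes.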

Do browsers have to know the size of an image before creating a spot for it
on the page at render time?  Couldn't they render the HTML as soon as it
arrives, enabling anchors and leaving a standard-sized hole, like the [S] icon 
that xmosaic uses, and resize the hole as the image data arrive?  It'd be a 
bit jumpy...unacceptably so?

|I guess the bottom line is that there is not much point in changing
|HTTP unless the resulting protocol can (1) at least match the Netscape
|UPP, and (2) simultaneously significantly improve network efficiency.

From: jim@spyglass.com (Jim Seidman)

|I have to disagree here.  If the issue is how quickly all the text can be
|downloaded, then the fastest way to do this is to send only the text and
|then pass on the images later.

Given the scenario above, it sounds like multipart/mixed gets an A in net.
citizenship (points off for sending all those nasty ASCII headers) and perhaps 
an A- in UPP (since the images do not all arrive simultaneously), although 
someone sketched a scheme for multiplexing several images into a MIME stream 
earlier this year on www-talk, I think, which would do just that.

|In my mind the only defensible reason for simultaneous connections is to
|reduce the round trip time penalty for loading all the pieces of a document.

It's to make the RTTs happen at the same time, by parallelizing them instead
of serializing them.
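
To put illustrative numbers on it (mine, for the sake of argument, not
measurements): with ten inline images and a 100ms round trip, serialized
connection setups alone cost on the order of a second, while parallel setups
cost roughly a single round trip--which is exactly the difference Netscape's
users perceive.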

Do any of you all out there with gigs and gigs of log files have any data on 
what percentage of requests for HTML docs come from Netscape?

From the HTTP perspective, multipart/mixed for representing HTML is probably
a bridge solution to realize a limited performance gain until we can see 
widespread deployment of next-generation and binary protocols.  But for the 
web as a worldwide information system, there are long-term benefits to 
extending the interchange of HTML, and therefore the web, beyond HTTP to 
other MIME-compliant systems like NNTP and SMTP.  

-marc
