Re: HTTP: T-T-T-Talking about MIME Generation

"Daniel W. Connolly" <connolly@hal.com> wrote:

  > Deploying MGET/multipart looks to me like:
  [a list of steps for clients, servers, including...]
  * A few information providers maybe start using it
    (It's 3 months into the future by now)

  > 
  > Meanwhile, commercial folks are implementing HTTP-NG at lightning
  > speed. Six months from now, all the major vendors are doing
  > interoperable compression and encryption over something like SCP or
  > SSL (not to mention strong authentication).

Sorry, I'm skeptical about this statement.  At least some of the
proposals for MGET/multipart and keep-alive are compatible with what
exists now.  For example, a client could attempt to send an MGET to a
server.  If the server chokes, the client can revert to a series of
regular GETs.
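
To make that fallback concrete, here is a minimal Python sketch.  The
MGET request syntax in it ("MGET * HTTP/1.0" plus one URI: line per
document) is my own guess, since the proposal's wire format is still
being discussed; the point is the fallback logic, not the syntax.

    import socket

    def fetch(host, paths, port=80):
        # Try one MGET for all paths; on failure, revert to plain GETs.
        # ASSUMPTION: the MGET request format below is hypothetical --
        # the proposal's wire format is still in flux.
        s = socket.create_connection((host, port))
        try:
            req = "MGET * HTTP/1.0\r\n" + "".join(
                "URI: %s\r\n" % p for p in paths) + "\r\n"
            s.sendall(req.encode("ascii"))
            reply = s.makefile("rb").read()   # HTTP/1.0: read to close
        finally:
            s.close()

        if b" 200 " in reply.split(b"\r\n", 1)[0]:
            return reply                      # multipart body; parse elsewhere

        # Server choked on MGET (a 501, a 400, or just a dropped
        # connection): the cost is one wasted round trip, after which
        # we behave exactly like any HTTP/1.0 client.
        parts = []
        for p in paths:
            s = socket.create_connection((host, port))
            try:
                s.sendall(("GET %s HTTP/1.0\r\n\r\n" % p).encode("ascii"))
                parts.append(s.makefile("rb").read())
            finally:
                s.close()
        return parts

Note that an old server costs us at most one extra round trip before
we degrade gracefully; no server or proxy has to change at all.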

By contrast, deploying HTTP-NG would require significant changes to
both clients and servers, and, despite the transition plan described by
Raggett and Spero, I see HTTP 1.0 and HTTP-NG as fundamentally unable
to interoperate.  (They propose using a proxy to translate.)

So I think vendors are less likely to switch in three months to
HTTP-NG, a protocol still being experimented with and, IMO, not quite
ready for prime time, than they are to adopt the MGET stuff.

It may well be that *some* vendors will have compression, encryption,
and session control in six months (some do now, in limited ways), but
I'm equally skeptical that "all the major vendors" will be doing so
"interoperabl[y]" if there's as yet no agreed-to standard upon which to
interoperate.

My tastes (obviously) run to a more evolutionary approach for HTTP.  I'm
unconvinced that the performance problems require a flash cut to a binary
protocol.  Spero has shown that doing multiple transactions over one
connection achieves significant performance improvements.  His response is
to change HTTP drastically.  Mine is to do so within the current overall
design.
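
To illustrate what "multiple transactions over one connection" buys
within the current design, here is a minimal sketch of a client
reusing one TCP connection for several GETs.  It assumes the server
honors the experimental Connection: Keep-Alive convention and sends a
Content-Length on every response; a careful client would fall back to
one connection per GET when either assumption fails.

    import socket

    def fetch_many(host, paths, port=80):
        # Fetch several documents over a single TCP connection.
        # ASSUMPTION: the server understands "Connection: Keep-Alive"
        # and supplies Content-Length; otherwise fall back to the
        # usual one-connection-per-GET behavior.
        s = socket.create_connection((host, port))
        f = s.makefile("rb")
        bodies = []
        try:
            for p in paths:
                s.sendall(("GET %s HTTP/1.0\r\n"
                           "Connection: Keep-Alive\r\n\r\n" % p).encode("ascii"))
                f.readline()                  # status line, e.g. HTTP/1.0 200 OK
                length = None
                while True:                   # read headers up to blank line
                    line = f.readline()
                    if line in (b"\r\n", b"\n", b""):
                        break
                    if line.lower().startswith(b"content-length:"):
                        length = int(line.split(b":", 1)[1].decode().strip())
                bodies.append(f.read(length)) # exactly one response body
        finally:
            s.close()
        return bodies

The savings come from paying the TCP connection setup (and slow-start)
once instead of once per document, which is exactly where Spero's
measurements locate the cost.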

Dave Kristol

Received on Friday, 16 December 1994 06:11:26 UTC