- From: <hallam@etna.ai.mit.edu>
- Date: Thu, 08 Aug 96 13:27:24 -0400
- To: Peter J Churchyard <pjc@trusted.com>, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
- Cc: hallam@etna.ai.mit.edu
I'd like to take an unpopular position here. I don't think that the HTTP/1.1 spec is too large at all. It may be larger than the average IETF spec, but I'm not all that impressed by many IETF specs. Many small specs are small because they stopped being developed once a minimum set of functionality was achieved. Every major application protocol could do with a thorough rewrite, and many such projects are in progress. Consider the following:

Telnet: A barely functional terminal protocol. It is claimed to contain a negotiation facility, yet that facility does not work. The result is that the user is prompted to type in the terminal type when connected. Other manufacturers "solve" this problem by not providing command line editing.

FTP: The spec for FTP was written before the directory/subdirectory concept was firmly established as the method of file organisation. As a result FTP has somewhat ambiguous connection semantics, making implementation of the FTP:// URL something of a trial-and-error process.

SMTP: A spec which was betrayed by implementation. If SMTP mail servers actually implemented the protocol then email would be much more reliable. As it is, the email user receives a host of spurious "bounced mail" messages. The obvious solution here is to provide a return identifier for bouncing an email so that these conditions can be handled automatically.

NNTP: A spec whose definition is determined more by the email source code than by its RFC, plus numerous threaded news extensions.

This is not to say that the above protocols are wrong or badly designed, nor that the moves to upgrade them are unimportant. The fact is, however, that a full description of a completed and upgraded NNTP spec would probably rival the HTTP spec in size. Comparing a completed HTTP spec to an incomplete NNTP spec is not a valid comparison.

It would be nice to strip down HTTP to its essentials and start again. We don't have that opportunity. We have a deployed protocol and little chance to do a major upgrade. That chance probably went about the time NCSA released Mosaic.

I'm somewhat unimpressed by the Scheme approach to stripping down a system. You don't make a system simpler by restricting its functionality; you make it more complex to implement systems, because each user must reinvent the missing functionality. To make a system simple you must apply more powerful ideas. Scheme loses because it rejects objects in favour of "simplicity". It would have been better to have based the system around objects and thrown out the old ideas that objects rendered obsolete.

It would be nice to apply a layered model to HTTP, but again I don't think that is necessarily appropriate. I don't think that HTTP decomposes in a layered fashion. I think we need to look elsewhere.

Phill
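The Telnet complaint above refers to the terminal-type option negotiation defined in RFC 854 and RFC 1091. As a minimal sketch of what that facility is supposed to do when it works (the server learns the terminal type automatically, so the user is never prompted for it), the client side of the exchange looks roughly like the following; the function name and the hard-coded "VT100" terminal string are illustrative only, not part of any particular implementation:

```python
import socket

# Telnet command bytes (RFC 854) and the TERMINAL-TYPE option (RFC 1091).
IAC, DO, WILL, SB, SE = 255, 253, 251, 250, 240
TERMINAL_TYPE = 24
IS, SEND = 0, 1  # subnegotiation verbs

def answer_terminal_type(sock: socket.socket, data: bytes,
                         term: bytes = b"VT100") -> None:
    """Reply to the two server messages that discover the terminal type."""
    # Server asks "DO TERMINAL-TYPE"; client agrees with "WILL TERMINAL-TYPE".
    if bytes([IAC, DO, TERMINAL_TYPE]) in data:
        sock.sendall(bytes([IAC, WILL, TERMINAL_TYPE]))
    # Server sends a "SEND" subnegotiation; client answers "IS <type>".
    if bytes([IAC, SB, TERMINAL_TYPE, SEND, IAC, SE]) in data:
        sock.sendall(bytes([IAC, SB, TERMINAL_TYPE, IS]) + term +
                     bytes([IAC, SE]))
```

When a client and server both carry out this exchange, the prompt-for-terminal-type behaviour Phill describes should not occur; his point is that in practice the negotiation is often missing or broken.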
Received on Thursday, 8 August 1996 10:25:28 UTC