- From: Mark Nottingham <mnot@mnot.net>
- Date: Thu, 4 Apr 2002 22:06:26 -0800
- To: Keith Moore <moore@cs.utk.edu>
- Cc: www-tag@w3.org
Keith,

On Tuesday, April 2, 2002, at 12:05 PM, Keith Moore wrote:

> The intended audience for the document was IETF working groups, and
> more generally, other groups or individuals defining protocol
> standards. The emphasis here is on *standards*, implying that there's
> an expectation that the protocol will be widely implemented and
> widely used. The prohibitions aren't intended to apply to private
> networks, enterprise-specific applications, or to bilateral
> agreements between consenting parties. So by extension I would say
> that the prohibitions weren't intended to apply to most of the things
> called "web services", though in the case of a particular web service
> that became so popular that nearly every site or host were expected
> to provide it - under those circumstances the prohibitions, and the
> logic behind them, would be more applicable.

That's very helpful. Unfortunately, people are interpreting the
document in a number of ways. Perhaps a reasonable outcome would be for
the TAG to generate a short statement to this effect to clarify their
understanding of 3205, as it relates to work in the W3C (current and
future)?

> As for use of port 80: Traffic monitoring by port # is useful, even
> if it's imperfect. The same can be said of firewalls that filter
> traffic based on port #. I like to say that firewalls don't actually
> provide any security assurance by themselves but they can
> dramatically reduce the # of applications that you have to analyze in
> order to have some assurance of security. If a wide variety of
> applications use port 80, the analysis becomes more difficult - a
> network admin can no longer ask him/herself "which machines are
> allowed to run web servers?" but instead has to allow for the
> possibility that each machine can run one *or more* services over
> port 80 - each of which needs to be analyzed separately. Granted that
> there aren't enough port #s available to assign one to every single
> application, but there are plenty of port #s to assign one to each
> standard or widely-used service. I fully agree (and so would everyone
> on IESG) that sites should not rely on port filtering to provide
> security, but there's a difference between relying on port filtering
> and using port + address filtering as a way to restrict the number of
> application servers which are capable of accepting input.

The problem, of course, is that each URI endpoint on an HTTP server is
potentially a new application, with or without SOAP. Whether it uses
HTML forms or XML-encapsulated messages is beside the point.
Standardizing something that encourages machine messaging over a
traditionally human-oriented medium doesn't help, but I think that
horse has already bolted; vendors are going to support it whether or
not it's standardized.
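To make that concrete, here's a minimal sketch (the host name and URI
paths are hypothetical, and it's written in present-day Python) of two
POSTs that a port/address filter can't distinguish, even though they
reach what are effectively two different applications:

    # Minimal sketch; HOST and the URI paths are hypothetical.
    import http.client

    HOST = "example.org"

    # 1. A traditional HTML form submission to port 80.
    conn = http.client.HTTPConnection(HOST, 80)
    conn.request("POST", "/search",
                 body="q=rfc+3205",
                 headers={"Content-Type": "application/x-www-form-urlencoded"})
    print(conn.getresponse().status)

    # 2. A SOAP message to another URI endpoint on the same server -- in
    #    effect a different application, but the same port and address.
    soap = ('<?xml version="1.0"?>'
            '<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">'
            '<env:Body><getQuote symbol="W3C"/></env:Body></env:Envelope>')
    conn = http.client.HTTPConnection(HOST, 80)
    conn.request("POST", "/services/quote",
                 body=soap,
                 headers={"Content-Type": "text/xml", "SOAPAction": '"getQuote"'})
    print(conn.getresponse().status)

    # To a port/address filter, both exchanges are just TCP traffic to
    # port 80 on HOST; only the payload tells them apart.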
> I admit that "traditional use of HTTP" is imprecise language, and I
> want to clarify that it wasn't intended to mean that HTTP or its uses
> should not evolve. The net, and most of its protocols, have been
> evolving continuously for the past 20+ years, and I don't see why
> HTTP should be an exception. By "traditional use" I was trying to
> anticipate the needs of network admins who would want to distinguish
> a new protocol from HTTP (as they understood it) for the purpose of
> filtering and traffic analysis.
>
> If HTTP over port 80 came to be used in so many different ways that a
> network administrator couldn't make any assumptions at all about the
> nature of the traffic - not even coarse assumptions - then this would
> be unfortunate, and IETF doesn't want to encourage things to evolve
> in that way. On numerous occasions it's been useful to be able to
> make some coarse differentiation of traffic by looking at port #s,
> and we don't want to see this functionality lost.

The relationship between ports, protocols and services is fuzzy at
best. When I brought up this issue, I was less worried about Web
Services than about things like the Semantic Web. Without the context
that you outline above, the document can be interpreted as quite strict
about how to use HTTP.

> I hope w3c people don't find this insulting, but I must confess that
> I see HTTP is the be-all and end-all of network protocols.

(I'm assuming there's a "don't" missing from this)

[...]

> And yet there are clearly numerous application spaces for which HTTP
> would be a poor choice - either because of the nature of HTTP itself
> or because of the characteristics of the infrastructure that was
> deployed to support HTTP. For that matter, it's also clear to me that
> TCP could be vastly improved or replaced (to be able to piggyback
> payload, and probably even security negotiation, on connection setup
> - and get much better latency than the typical TCP/SSL/HTTP layered
> negotiation, along with better bandwidth utilization for short
> transfers). And these things have implications for the cost of
> running services and networks, and I'd like these to be as low as
> possible.

On the contrary, I think that many if not most people involved would
agree with you; HTTP as a Web Services substrate is seen as a necessary
bootstrap, to be largely replaced by BEEP or DIME-over-TCP or something
else.

> But because I see the need for diversity, I don't think of "the web"
> as having much to do inherently with HTTP - not even as a negotiation
> mechanism, because it's clear to me that if we're going to have a
> universal negotiation mechanism, it needs to be much lower overhead
> than HTTP. (e.g. it shouldn't require a TCP connection setup - that
> adds too much delay). So to me a document that tries to explain some
> of the consequences and considerations of using HTTP to support some
> application's higher-layer protocol doesn't restrict "the web" in any
> way.

Very true, if taken with the large grain of contextual salt that you
outline above.

--
Mark Nottingham
http://www.mnot.net/
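P.S. To put rough numbers on the connection-setup overhead described
above: the figures below are assumed, illustrative round-trip counts
(not measurements), but they show why piggybacking payload and security
negotiation on connection setup is so attractive for short transfers.

    # Back-of-the-envelope sketch; the RTT value and round-trip counts
    # are assumptions for illustration, not measurements.
    RTT_MS = 100  # assumed wide-area round-trip time

    layered = {
        "TCP three-way handshake": 1,  # client data can follow the final ACK
        "SSL full handshake": 2,       # classic full handshake, roughly 2 RTTs
        "HTTP request/response": 1,
    }

    total = sum(layered.values())
    print("Layered TCP/SSL/HTTP: %d RTTs ~= %d ms" % (total, total * RTT_MS))

    # A hypothetical transport that piggybacks the first request (and
    # the security negotiation) on connection setup could approach 1 RTT:
    print("Piggybacked setup:    1 RTT  ~= %d ms" % RTT_MS)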
Received on Friday, 5 April 2002 01:10:01 UTC