From: Keith Moore <moore@cs.utk.edu>
Date: Wed, 28 Nov 2001 10:47:38 -0500
To: Brian E Carpenter <brian@hursley.ibm.com>
cc: Keith Moore <moore@cs.utk.edu>, Jim Gettys <jg@pa.dec.com>, Claudio Allocchio <Claudio.Allocchio@garr.it>, Mark Baker <distobj@acm.org>, John Ibbotson <john_ibbotson@uk.ibm.com>, Discuss Apps <discuss@apps.ietf.org>, Richard P King <rpk@us.ibm.com>
> > > Exactly. We can all agree on this. So given that fact, and the fact that
> > > people do want to reliably transfer hypertext across unreliable,
> > > non-transparent and intermittently connected networks, what should we do?
> >
> > We should explain why it doesn't make good sense to do these things,
> > and provide alternatives that do make sense.
>
> Wait a minute. I'd love it if the network was reliable, transparent
> and connected 100% of the time, but it isn't. We have to deal with that.

so how does layering over HTTP help this situation?  it certainly doesn't
add reliability or transparency, nor does it help fix broken connections.
you can provide reliability and transparency over HTTP, but you have to
work harder to do this than to provide the same services over IP.

even if you use HTTP as a means to get through firewalls, this is a short
term fix at best, because the fact that traffic is tunnelled over HTTP
doesn't mean that it's any more suitable to pass through the firewall than
raw IP traffic.

what we need are better means to provide security than our current
firewalls: fine-grained access control that is based on properties other
than just the network locations of the participants, with the ability to
specify access control centrally (within a domain) but with enforcement
done by the hosts and servers.  firewalls should also be able to examine
credentials and provide coarse filtering of traffic, to protect the
network and to provide security in depth.  and the credentials need to be
usable in multiple security domains.
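to make the last paragraph concrete, here is a minimal sketch of that
model in Python.  everything in it is a hypothetical illustration, not an
existing API: the Credential and DomainPolicy names, the role names, and
the trusted-domain list are all invented for the example.  the shape of
the idea is that policy is written once for a domain, each server enforces
it itself by examining the caller's credentials rather than its network
address, and a credential issued by one trusted domain is usable in
another.

    # hypothetical sketch of credential-based access control:
    # central policy, host-side enforcement, multi-domain credentials.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Credential:
        """caller identity plus attributes, issued by a security domain."""
        subject: str
        domain: str        # issuing security domain
        roles: frozenset

    @dataclass
    class DomainPolicy:
        """centrally administered: which roles may perform which
        operations, and which foreign domains' credentials are trusted."""
        allowed: dict = field(default_factory=dict)   # operation -> roles
        trusted_domains: set = field(default_factory=set)

        def permits(self, cred: Credential, operation: str) -> bool:
            if cred.domain not in self.trusted_domains:
                return False   # credential from an untrusted domain
            return bool(cred.roles & self.allowed.get(operation, set()))

    class Server:
        """a host enforcing the central policy itself, instead of
        relying on a perimeter firewall to filter by network location."""
        def __init__(self, policy: DomainPolicy):
            self.policy = policy

        def handle(self, cred: Credential, operation: str) -> str:
            if not self.policy.permits(cred, operation):
                return "403 denied"
            return f"200 ok: {operation} performed for {cred.subject}"

    # the same credential works across cooperating domains, because
    # trust is expressed in policy rather than in IP addresses.
    policy = DomainPolicy(
        allowed={"read": {"staff", "guest"}, "write": {"staff"}},
        trusted_domains={"cs.utk.edu", "hursley.ibm.com"},
    )
    server = Server(policy)
    alice = Credential("alice", "hursley.ibm.com", frozenset({"staff"}))
    print(server.handle(alice, "write"))    # 200 ok
    mallory = Credential("mallory", "evil.example", frozenset({"staff"}))
    print(server.handle(mallory, "write"))  # 403 denied

note that nothing here stops a firewall from examining the same
credentials and doing coarse filtering at the perimeter; the point of the
split is that the authoritative decision lives at the host, so security
doesn't depend on which port or protocol the traffic happens to ride over.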
Received on Wednesday, 28 November 2001 10:48:16 UTC