- From: Miles Sabin <miles@mistral.co.uk>
- Date: Fri, 02 Feb 2001 10:35:35 +0900
- To: uri@w3.org
Mark Baker wrote,
> Aaron Swartz wrote,
> > > From what I've seen, a lot of folks are concerned about
> > > using the HTTP scheme for namespaces because they don't
> > > want their web servers overloaded.
> >
> > Why would their web servers be overloaded?
>
> Didn't you hear, Aaron? An HTTP URL has to be resolved before
> any meaning can be associated with it. 8-)
>
> The assumption may stem from the fact that validating XML
> parsers have to run off to get the DTD to build the infoset.
> Though that's a problem with DTDs and not URLs, I've heard that
> given as a reason for not using URLs as public ids.

Sadly, HTTP URLs _will_ be resolved even when that's unnecessary
for them to be meaningful as bare identifiers. And if they are,
then servers (particularly ones hosting extremely popular DTDs or
namespaces) might well be in trouble.

xml-dev's RDDL, or anything similar, if widely adopted, would put
namespaces more or less on a par with DTDs on the 'XML parsers
running off to get stuff' front.

These aren't problems with either DTDs or namespace URIs per se.
The problem is using a protocol (and, by extension, encoding that
protocol in an identifier via a scheme) which doesn't support
distribution and replication in a way which is appropriate for
this kind of situation.

You've mentioned an Akamai-type solution to this problem. I don't
see how that's supposed to help ... could you elaborate?

Cheers,

Miles

--
Miles Sabin                          InterX
Internet Systems Architect           5/6 Glenthorne Mews
+44 (0)20 8817 4030                  London, W6 0LJ, England
msabin@interx.com                    http://www.interx.com/
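[Archive editor's note: the point above, that an HTTP namespace URL is meaningful as a bare identifier without ever being resolved, can be sketched with a namespace-aware parser. This is an illustrative example only, using Python's standard-library ElementTree; the namespace URI is hypothetical.]

```python
# A namespace-aware XML parser treats the namespace URL as an opaque
# string: it is compared and attached to element names, but no HTTP
# request is ever made to http://example.org/ns.
import xml.etree.ElementTree as ET

doc = '<doc xmlns="http://example.org/ns"><item/></doc>'
root = ET.fromstring(doc)

# The parser qualifies element names with the URI (Clark notation);
# resolution of the URL plays no part in parsing.
print(root.tag)     # {http://example.org/ns}doc
print(root[0].tag)  # {http://example.org/ns}item
```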
Received on Thursday, 1 February 2001 21:30:32 UTC