RE: additional XMLDSIG URIs

> -----Original Message-----
> From: Donald E. Eastlake 3rd [mailto:dee3@torque.pothole.com] 
> Sent: Thursday, April 19, 2001 6:13 AM
> To: Brian LaMacchia
> Cc: w3c-ietf-xmldsig@w3.org; lde008@dms.isg.mot.com
> Subject: Re: additional XMLDSIG URIs 
>
> My draft doesn't prohibit there being anything at the URL's. 
> These additional URIs are, at this instant, not part of the 
> W3C standard or otherwise in the orbit of the W3C.  The 
> XMLDSIG standard permits algorithms defined by other 
> organizations, such as these, and does not require them to 
> be dereferenceable.  Do you want to change the XMLDSIG 
> standard to require dereferenceability?

I don't think XMLDSIG can require that every URI used be dereferenceable,
since the identifiers may refer to private-label algorithms or only have
meaning within a particular closed community.  However, I think
anything we define in the standard (or a companion document) should be
dereferenceable.
 
> But I still don't understand why you assume the suggested 
> URIs would not be dereferenceable.  In fact, I would think 
> that the IETF would be more stable and better able to keep 
> material there than your typical current dot.com.  

You put the URLs in the "ietf.arpa" domain.

1) That domain is not resolvable currently, at least none of the root
servers seem to know about it.  Please point me to a nameserver for the
ietf.arpa domain.  If the domain isn't active yet, please tell me who's
going to operate machines to provide DNS service for that domain, and
what the policies will be to get content served from within it.  I
haven't seen a plan for this yet; maybe it's in another draft somewhere?
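For what it's worth, the resolvability check I'm describing is easy to automate.  Here's a minimal sketch in Python (the `ietf.arpa` result reflects the situation as of this writing; `resolvable` is just an illustrative helper name):

```python
import socket

def resolvable(hostname: str) -> bool:
    """Return True if the name resolves via the system's DNS resolver."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# At the time of writing, ietf.arpa is not delegated by the root
# servers, so a lookup like this fails:
# resolvable("ietf.arpa")
```

Anyone can run the same check against whatever domain ultimately hosts these identifiers.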

2) I never suggested that the URLs for these algorithms reside within
the domain of any current corporation, so I do not understand your
comparison between the IETF and a "typical current dot.com".  I
specifically said that I expected the URLs to be "owned" by either NIST
or the W3C.  The W3C has already demonstrated their ability to provide
these services, and I believe NIST's stability is beyond question.  

> Furthermore, I can't understand why you say they would be 
> like OIDs.  There is no global database or protocol system 
> associated with OIDs that I am aware of.  Domain names and 
> URIs are inherently different in having a global database, 
> which usually contains physical address pointers, and a 
> system of protocols associated with them.

I don't see any difference between OIDs and un-dereferenceable URLs in
the ietf.arpa domain unless you are assuming that particular DNS records
for these subdomains exist and are populated in some particular fashion.
I didn't see any mention of that in either of your drafts.  Furthermore,
I believe your statement above is incorrect: URIs in general do not have
any global database, although URIs in a particular scheme may inherit
one based on their scheme-specific structure.  For HTTP of course
there's a domain name in the URL, which allows you to look up a record
in DNS, but how does that help us specifically if the host doesn't
exist?  
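To make the scheme-specific point concrete: only some URI schemes embed a DNS name at all.  A minimal sketch using Python's standard URI parser (the URN shown is purely illustrative, not a registered name):

```python
from urllib.parse import urlparse

# An HTTP URL carries a DNS name in its authority component, so the
# global DNS database can in principle be consulted for it.
http_uri = urlparse("http://www.w3.org/2000/09/xmldsig#sha1")
print(http_uri.hostname)  # www.w3.org

# A URN-style URI has no authority component: there is no host to
# resolve, and hence no global database sitting behind it.
urn_uri = urlparse("urn:example:some-algorithm")
print(urn_uri.hostname)  # None
```

Whether the embedded name actually resolves is, of course, a separate question from whether the syntax provides one.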
 
> As for integration testing, I would argue that if there are 
> no DigestMethod's with a parameter, then the mechanism for 
> handling parameters to them will never be exercised and is a 
> lot more likely to be buggy than an implementation of this 
> trivial truncation feature.

Huh?  The feature exists solely to exercise the implementations, not
because there's any customer need for it?  How can that possibly be
adequate justification for inclusion in a standard?
 
> So I can't make any assumptions about implementations but 
> should make assumptions about cryptographic size/strength 
> requirement quantization?

Yes, because that's part of our task -- to make security recommendations
on minimum cipher strength, modes of operation, etc.  We do this all
the time.
 
					--bal

Received on Friday, 20 April 2001 12:31:00 UTC