Re: additional XMLDSIG URIs

Hi Brian,

From:  "Brian LaMacchia" <bal@microsoft.com>
Date:  Fri, 20 Apr 2001 09:26:36 -0700
Message-ID:  <BCDB2C3F59F5744EBE37C715D66E779CEAB66A@red-msg-04.redmond.corp.microsoft.com>
To:  "Donald E. Eastlake 3rd" <dee3@torque.pothole.com>
Cc:  <w3c-ietf-xmldsig@w3.org>, <lde008@dms.isg.mot.com>

>> From: Donald E. Eastlake 3rd [mailto:dee3@torque.pothole.com] 
>> Sent: Thursday, April 19, 2001 6:13 AM
>> To: Brian LaMacchia
>> Cc: w3c-ietf-xmldsig@w3.org; lde008@dms.isg.mot.com
>> Subject: Re: additional XMLDSIG URIs 
>>
>> My draft doesn't prohibit there being anything at the URLs.
>> These additional URIs are, at this instant, not part of the
>> W3C standard or otherwise in the orbit of the W3C.  The
>> XMLDSIG standard permits algorithms defined by other
>> organizations, such as these, and does not require them to
>> be dereferenceable.  Do you want to change the XMLDSIG
>> standard to require dereferenceability?
>
>I don't think XMLDSIG can require that every URI used be dereferenceable,
>since the identifiers may refer to private-label algorithms or only have
>meaning within a particular closed community.  However, I think
>anything we define in the standard (or a companion document) should be
>dereferenceable.

Well, it currently isn't a WG document.  In any case, even if it were
an Informational companion WG document, I would think it should list
useful or likely-to-be-widely-used URIs that, perhaps because they
were formulated and are being promoted by some other organization, are
not dereferenceable.

>> But I still don't understand why you assume the suggested
>> URIs would not be dereferenceable.  In fact, I would think
>> that the IETF would be more stable and better able to keep
>> material there than your typical current dot-com.
>
>You put the URLs in the "ietf.arpa" domain.
>
>1) That domain is not resolvable currently, at least none of the root
>servers seem to know about it.  Please point me to a nameserver for the
>ietf.arpa domain.  If the domain isn't active yet, please tell me who's
>going to operate machines to provide DNS service for that domain, and
>what the policies will be to get content served from within it.  I
>haven't seen a plan for this yet; maybe it's in another draft somewhere?

It should be clear enough from draft-eastlake-fqdn-uri that this
domain isn't live right now and who would have the authority and
responsibility to make it live.

But I have to admit that I don't attach nearly as much importance to
this as you do.  I consider dereferenceability for documentation to be
mostly a cute hack, and dynamic loading of code to be a recipe for
another security disaster.  Any secure implementation I'm likely to
trust will be doing internal string comparisons against a built-in
table.
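
For example, here is a minimal sketch (Python; only the core SHA-1
URI below is a real XMLDSIG identifier, the rest of the shape is
illustrative) of the kind of thing I mean.  The algorithm URI is an
opaque string compared against a compiled-in table; nothing is ever
dereferenced and no code is fetched:

    import hashlib

    # Built-in table: algorithm URI -> local implementation.
    DIGEST_ALGORITHMS = {
        "http://www.w3.org/2000/09/xmldsig#sha1": hashlib.sha1,
    }

    def digest(algorithm_uri, data):
        try:
            factory = DIGEST_ALGORITHMS[algorithm_uri]
        except KeyError:
            # Unknown URI: fail closed rather than trying to
            # resolve it or dynamically load code for it.
            raise ValueError("unsupported DigestMethod: %s" % algorithm_uri)
        return factory(data).digest()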

>2) I never suggested that the URLs for these algorithms reside within
>the domain of any current corporation, so I do not understand your
>comparison between the IETF and a "typical current dot.com".  I
>specifically said that I expected the URLs to be "owned" by either NIST
>or the W3C.  The W3C has already demonstrated their ability to provide
>these services, and I believe NIST's stability is beyond question.  

I'm not aware of any information suggesting that NIST would be
interested in doing this, or that their web services are structured
to provide stable URIs.  In fact, there is evidence to the contrary
from some problems Joseph Reagle just had with links to FIPS
documents at NIST breaking.

As for dot-coms, I was thinking of the wider case where this document
and other documents had a variety of URIs from a variety of entities
that wanted to own the algorithms indicated.

The W3C does pretty well, but their standard style forces everything
through their servers onto their storage.  It seems to me there might
be some URIs of some prominence where some other entity wanted to have
change control over the material at the URI.

>> Furthermore, I can't understand why you say they would be 
>> like OIDs.  There is no global database or protocol system 
>> associated with OIDs that I am aware of.  Domain names and 
>> URIs are inherently different in having a global database, 
>> which usually contains physical address pointers, and a 
>> system of protocols associated with them.
>
>I don't see any difference between OIDs and un-dereferenceable URLs in
>the ietf.arpa domain unless you are assuming that particular DNS records
>for these subdomains exist and are populated in some particular fashion.
>I didn't see any mention of that in either of your drafts.  Furthermore,
>I believe your statement above is incorrect: URIs in general do not have
>any global database, although URIs in a particular scheme may inherit
>one based on their scheme-specific structure.  For HTTP of course
>there's a domain name in the URL, which allows you to look up a record
>in DNS, but how does that help us specifically if the host doesn't
>exist?  

You are right; I was tacitly assuming an FQDN-based URI.
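
To make that distinction concrete, here is a minimal sketch (Python,
purely illustrative) of what "resolvable" means for an FQDN-based
URI: the host name can be looked up in the global DNS database,
entirely independent of whether anything is actually served at the
URI itself:

    import socket
    from urllib.parse import urlsplit

    def host_resolves(uri):
        # True if the URI's host exists in the global DNS; says
        # nothing about whether the URI is dereferenceable.
        host = urlsplit(uri).hostname
        if host is None:
            return False  # no FQDN component to look up
        try:
            socket.gethostbyname(host)
            return True
        except socket.gaierror:
            return False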

>> As for integration testing, I would argue that if there are 
>> no DigestMethods with a parameter, then the mechanism for
>> handling parameters to them will never be exercised and is a 
>> lot more likely to be buggy than an implementation of this 
>> trivial truncation feature.
>
>Huh?  The feature exists solely to exercise the implementations, not
>because there's any customer need for it?  How can that possibly be
>adequate justification for inclusion in a standard?

It is a factor to consider and could be adequate justification.  A
big factor would be whether you believe there would ever be a
sufficient need for a DigestMethod with one or more explicit
parameters in the foreseeable future.  That is something people might
differ on.
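
To make the question concrete, here is a rough sketch (Python; the
TruncationLength element is hypothetical, invented here purely for
illustration) of the parameter-handling path that goes unexercised
if no defined DigestMethod ever takes an explicit parameter:

    import xml.etree.ElementTree as ET

    DS = "http://www.w3.org/2000/09/xmldsig#"

    def parse_digest_method(elem):
        # elem is a ds:DigestMethod element.
        algorithm = elem.get("Algorithm")
        params = {}
        for child in elem:
            # This loop is the code path that never runs if no
            # DigestMethod is ever given an explicit parameter.
            if child.tag == "{%s}TruncationLength" % DS:
                params["truncate_to_bits"] = int(child.text)
        return algorithm, params

    # Hypothetical example document:
    elem = ET.fromstring(
        '<DigestMethod xmlns="%s" Algorithm="%ssha1">'
        '<TruncationLength>128</TruncationLength>'
        '</DigestMethod>' % (DS, DS)
    )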
	  
>> So I can't make any assumptions about implementations but 
>> should make assumptions about cryptographic size/strength 
>> requirement quantization?
>
>Yes, because that's part of our task -- to make security recommendations
>on minimum cipher strength, modes of operation, etc.  We do this all
>the time.

I didn't say minimum; I said quantization.  Why do we have
HMACOutputLength?  By your argument, shouldn't we just fix the right
length, or at least force a minimum, for HMAC?  I was thinking of
HMAC as an example when I added the parameters.
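
For reference, a minimal sketch (Python; byte-granular only to keep
it short) of the HMACOutputLength semantics in the core spec, where
the MAC value is truncated to its leftmost bits:

    import hashlib
    import hmac

    def hmac_sha1_truncated(key, data, output_length_bits):
        # Per the core spec, truncation keeps the leftmost bits.
        mac = hmac.new(key, data, hashlib.sha1).digest()
        if output_length_bits % 8 != 0:
            raise ValueError("bit-granular truncation not sketched here")
        return mac[:output_length_bits // 8]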

The SHA sizes are set to match the other US Government algorithms:
SHA-1 for the original DSS; SHA-256, -384, and -512 for the AES key
sizes.  If you could guarantee that the entire SHA family of hashes
would be used for these functions and only these, then I'd see no
reason to truncate.  However, other algorithms, or other key sizes to
protect, could reasonably use other hash lengths.  I suppose someone
who needed some intermediate hash length when signing a key could
always specify the truncation as part of the signature algorithm...
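
A sketch of what that might look like (Python; the URI and the
160-bit figure are both made up purely for illustration):

    import hashlib

    # Hypothetical identifier for a signature algorithm defined
    # as RSA over SHA-256 truncated to its leftmost 160 bits.
    HYPOTHETICAL_URI = "http://example.org/sigs#rsa-sha256-trunc160"

    def truncated_sha256_160(data):
        # The truncation is fixed by the algorithm definition,
        # so no explicit DigestMethod parameter is needed.
        return hashlib.sha256(data).digest()[:160 // 8]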

>					--bal

Donald

Received on Monday, 23 April 2001 09:13:19 UTC