[Bug 26465] Algorithm normalization doesn't allow arbitrary operations for AlgorithmIdentifier fields

https://www.w3.org/Bugs/Public/show_bug.cgi?id=26465

--- Comment #5 from Mark Watson <watsonm@netflix.com> ---
(In reply to Ryan Sleevi from comment #4)
> I apologize, but I'm still missing something / still find your definition a
> bit vague, because it seems like you're mixing solution with problem.
> 
> (In reply to Mark Watson from comment #3)
> > Because I cannot have a member in an algorithm identifier that identifies
> > an algorithm to be used with an operation other than 'digest' or the same as
> > the outer algorithm identifier.
> 
> To confirm, your problem is simply that the normalized algorithm of an
> AlgorithmIdentifier is constrained to the operation type of the parent.
> 
> Can you describe an actual algorithm or problem with this? At least with our
> other API concerns, there are real and practical algorithms that demonstrate
> this issue. This seems a bit even more... impractical?

I don't have a non-hypothetical example. But we are talking about extensibility
here, so we should allow for all kinds of extension (within reason), unless
there is some reason to believe that this form of extension is unlikely to be
needed.

My hypothetical example was a new signature algorithm which needs to be
parameterized by a choice of encryption algorithm.

If one algorithm makes internal use of another algorithm like this, there is no
reason to expect the inner operation to match the outer one. Indeed, the
existing concrete example is of signature algorithms which internally make use
of hash algorithms, where the inner operation (digest) does not match the outer
one (sign).
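
For concreteness (my illustration, not text from the spec), the existing case
looks like this: an algorithm dictionary passed to a "sign" operation whose
nested member is itself a digest algorithm, e.g. ECDSA's hash member:

// Outer algorithm: passed to the "sign" operation.
// Inner "hash" member: a digest algorithm, so it is normalized against the
// "digest" operation, not "sign".
const ecdsaSignParams = {
  name: "ECDSA",
  hash: { name: "SHA-256" },
};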

> 
> > - a container which mirrors the structure of the target type, specifying for
> > each AlgorithmIdentifier member what the operation is to be used to
> > normalize it (would need to support members which are themselves
> > Dictionaries).
> 
> Apologies, but I still cannot make heads nor tails of what you're proposing.

I think you need to read what I said more carefully, then.

> It sounds, which surely can't be correct, that you're proposing duplicating
> the interface definitions.

Indeed, that's not correct. What on earth in what I said makes you think that?

Suppose the IDL Dictionary for my hypothetical signature algorithm is as in my
example:

dictionary NewSignatureAlgorithm : AlgorithmIdentifier {
   AlgorithmIdentifier enc;
};

and suppose I have a value, again as in my example:

{ name : "NewSignature", enc: { name : "AES-GCM" } }

and furthermore, [[supportedAlgorithms]] is equal in part to:

{ "sign" : { "aliases" : ...,
             "algorithms" : { "NewSignature" : NewSignatureAlgorithm } } }

Now, following the normalization rules, I will look up "sign" and then
"NewSignature" in [[supportedAlgorithms]] and come up with a 'desiredType' of
NewSignatureAlgorithm (step 8 in 20.4.5).

Then I will perform the IDL conversion to that desiredType. Finally, I'll
traverse NewSignatureAlgorithm and discover that the member "enc" has type
AlgorithmIdentifier. In step 12.5 of 20.4.5 the third choice is triggered and I
will: "Set the dictionary member on normalizedAlgorithm with key name key
(="enc") to the result of normalizing an algorithm, with the alg set to
idlValue (={ name : "AES-GCM" }) and the op set to op (="sign")."

So, we are constrained to normalizing the "enc" member using operation "sign".
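
To make the current behaviour concrete, here is a minimal TypeScript sketch of
that lookup-and-recurse flow (my own rendering, with a simplified registry
shape; the algorithmMembers field and the registration layout are hypothetical,
not the spec's internal data model):

type AlgorithmIdentifier =
  string | { name: string; [member: string]: unknown };
type NormalizedAlgorithm = { name: string; [member: string]: unknown };

// Names of the registered dictionary's AlgorithmIdentifier members.
interface Registration { algorithmMembers: string[]; }

// Simplified stand-in for [[supportedAlgorithms]]: operation -> algorithm
// name -> registration.
const supportedAlgorithms: Record<string, Record<string, Registration>> = {
  sign: { NewSignature: { algorithmMembers: ["enc"] } },
};

function normalize(alg: AlgorithmIdentifier, op: string): NormalizedAlgorithm {
  if (typeof alg === "string") return normalize({ name: alg }, op);

  const registration = supportedAlgorithms[op]?.[alg.name];
  if (!registration) throw new Error("NotSupportedError");

  const normalized: NormalizedAlgorithm = { name: alg.name };
  for (const key of registration.algorithmMembers) {
    // Current step 12.5 rule: the nested AlgorithmIdentifier is normalized
    // with op set to the outer op, so "enc" is forced to be a "sign"
    // algorithm here.
    normalized[key] = normalize(alg[key] as AlgorithmIdentifier, op);
  }
  return normalized;
}

In this sketch, normalize({ name: "NewSignature", enc: { name: "AES-GCM" } },
"sign") fails at the inner lookup, because "AES-GCM" is not registered under
"sign", which is exactly the constraint above.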

If we want to allow the flexibility for the "enc" member to be normalized using
operation "encrypt", then instead of just looking up 'desiredType' in
[[supportedAlgorithms]] we would also want to be provided with an object like
this:

{ "enc" : "encrypt" }

which specifies the operations to be used for the AlgorithmIdentifier members
of desiredType. 
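
Here is a sketch of how that could plug in (the memberOperations name and the
registration shape are my invention, purely to illustrate the idea):

// Hypothetical extension of the registration above: the extension
// specification also registers, for each AlgorithmIdentifier member, the
// operation to use when normalizing it.
interface RegistrationWithOps {
  algorithmMembers: string[];
  memberOperations: Record<string, string>; // e.g. { enc: "encrypt" }
}

function innerOperation(
  registration: RegistrationWithOps,
  key: string,
  outerOp: string,
): string {
  // Proposed rule: use the operation registered for this member, if any;
  // otherwise fall back to today's behaviour and inherit the outer op.
  return registration.memberOperations[key] ?? outerOp;
}

Step 12.5 would then normalize the "enc" member with
innerOperation(registration, "enc", "sign"), i.e. with op "encrypt", rather
than unconditionally reusing the outer op.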

> 
> > - a container which maps from sub-types of AlgorithmIdentifier to the
> > operation name, so that extension specifications can define that
> > NewHashAlgorithmIdentifier should use "digest" and
> > EncryptAlgorithmIdentifier should use "encrypt" etc.
> 
> I'm not sure the value of "don't monkeypatch" is worth the value here,
> especially when there's no concrete, real-world use case, and it's just an
> academic concern.

Well, as I said, the constraint that the inner operation must be the same as
the outer one makes no sense, especially since, in the only concrete example
("sign" vs. "digest"), they don't match.

If you want to allow future flexibility without going as far as specifying the
inner algorithm normalization rules, then in that third choice of step 12.5 of
20.4.5 we can just say:

"Set the dictionary member on normalizedAlgorithm with key name key to the
result of normalizing an algorithm, with the alg set to idlValue and the op set
to the value specified in the specification of algName."
