RE: An Xpath-based Solution

Hi Rich,

What I was saying is that the minutes of the latest telecon indicated that
no syntax change would be made, with the result that core behavior would
NOT completely validate the signature in the example I gave below.

It is not me personally but rather the WG majority who are proposing that
validation of the Manifest reference be passed off to the application.  At
one point I proposed adding a Verify="Yes" attribute to Reference (or
something like that) to indicate that core behavior should validate the
Reference.  The opposition to the idea was stiff, for reasons that were
never clearly spelled out in the telecon.  In fact, the only good argument
against the Verify attribute, stated by Don, was that a true validation (in
a court case, for example) is likely to verify 'everything' signed by the
signer.

At this point, it became clear to me that the real problem here is that the
default should be to verify 'everything' that the signer signed, and this
default has been hijacked by the desire to create optimizations for
intermediate processing steps in certain applications like IOTP.

So here's the question:  Why has it become the default that we verify less
than what the signer signed?  I ask because it seems that the current
design favors optimizations for certain applications to the detriment of
other applications that don't work at all with core verification behavior.

It seems to me that if an application wants to optimize by not always
verifying 'everything', then that is the business of the application, and
the application's designers should be put to the task of defining when and
how much verification they will do.  I realize that this may be a hard pill
to swallow because it requires looking at signatures from a different
perspective than was represented in the Brown draft (which was greatly
influenced by protocol message signing needs like those in IOTP).

BTW, some have said that we are venturing too far into application land.
Firstly, this argument seems to be whipped out only when it is convenient.
For example, IDREF is a mistake but was added so that certain applications
could function without using an XPath transform.  So hesitation to change
the syntax appears only sometimes, and I am not convinced that it is being
fairly applied to all scenarios.  Secondly, I think that some are thinking
too heavily of core behavior as being restricted to signing a
preconstructed bucket of bits, which forgets that a very large part of our
mandate is to robustly sign XML.  The bucket-of-bits approach assumes that
a message M is preconstructed for core and ready for hashing, whereas in
signing XML, the ability to precisely define how the message M is
constructed MUST be part of the process.

To be honest, the inclusion of XPath transforms has given us a good
mechanism for flexible construction of M, except that almost every time
someone in the WG finds they need it, they try to invent alternate syntax
(like IDREF) or semantics (like automatic shaving of start and end tags, and
base-64 decoding)-- all to avoid the inevitable.
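To illustrate the point, here is a toy sketch in Python (my own
illustration, not WG syntax; the Document element and its content are made
up): the "shaved start/end tags plus base-64 decode" semantics that keep
getting reinvented are exactly the composition of an XPath of
string(text()) followed by the existing base-64 transform.

```python
import base64
import xml.etree.ElementTree as ET

# Toy element whose character content is base-64 encoded data
# (hypothetical example, not from the draft).
doc = ET.fromstring('<Document Id="X">SGVsbG8=</Document>')

# "Shaving the start and end tags" is just the XPath string(text())
# of the element...
shaved = doc.text

# ...and the proposed automatic decoding is the existing base-64
# transform applied to that string.
octets = base64.b64decode(shaved)
print(octets)  # b'Hello'
```

No new syntax or semantics are needed; the two existing transforms compose.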

Finally, I also don't find it a satisfying argument that we shouldn't make
changes because vendors are already implementing this.  It is a working
draft, and that is the risk one takes when implementing a working draft.
The process of working through the details until we get the right solution
should NOT be encumbered by vendors who've jumped the gun.  They will have
to make changes, but they are still ahead of those who did not start at
all.  Besides, as a developer I'm quite well aware of how little actual
code change is necessary for what I'm talking about (e.g. far less effort
than it took to write this email).

John Boyer
Software Development Manager
UWI.Com -- The Internet Forms Company


-----Original Message-----
From: w3c-ietf-xmldsig-request@w3.org
[mailto:w3c-ietf-xmldsig-request@w3.org]On Behalf Of
rhimes@nmcourt.fed.us
Sent: Friday, December 17, 1999 11:03 AM
To: w3c-ietf-xmldsig@w3.org
Subject: Re:An Xpath-based Solution



Thanks, John.  This was the approach I preferred.  I'm still confused as to
whether core behavior would validate the reference, or if you are proposing
that validation (of the reference) be passed off to the application.  If it
is part of core behavior, I don't see the difficulty in moving the document
(internal to external, external to internal, or external to external).
Such difficulty was implied in the telecon.

Re your statement in the previous message:
"Basically, the core signature verifies.  If your app took the signature
apart, then presumably it can put it back together again before trying to
verify it with core behavior.  Or, your app could support manifest
validation."

The first part of this paragraph doesn't appear to refer to your proposed
approach.  I read this as saying that because we are using a manifest (in
your procedure below), manifest support is pushed off to the application.
Thus, core behavior does not validate the referenced document against the
hash.  If true, I disagree, and strongly believe that validation of the
referenced document (against the hash) should be part of core behavior in
your proposed approach.  If my concern is based on a misunderstanding, I'd
appreciate a clarification.

Thanks,
Rich


____________________Reply Separator____________________
Subject:    An Xpath-based Solution
Author: <w3c-ietf-xmldsig@w3.org>
Date:       12/16/99 9:16 PM

The example below will eventually become part of the scenarios document, but
Joseph requested it be posted in today's telecon.

Let's do the complete example of your scenario, which includes the XPath
you asked about above.  You have a SignedInfo that contains an
ObjectReference O_1.  The IDREF of O_1 indicates a Manifest M.  In the
Manifest, there will be an ObjectReference O_2 whose IDREF indicates an
element X, where the character content of X is the base-64 encoded PDF
document of interest.  The transforms T_2 in O_2 include a base-64
transform *after* an XPath of "string(text())" (note that child:: is the
default axis, so I've left it out of the example).  The transforms T_1 of
ObjectReference O_1 (the one in SignedInfo) must take as input the Manifest
M, and yield as output (M minus (T_2 + IDREF + Location))-- but only if
T_2 is exactly as described above.  T_1 will contain the specific
description of the T_2 that can be omitted, not just a statement saying
that all transforms can be thrown out.

<Signature>
  <SignedInfo>
    <ObjectReference IDREF="M">
      <Transforms> <!-- This is T_1 -->
        <Transform Algorithm="&xpath;">
          descendant::node()
          [
            not(self::Location and parent::ObjectReference) and
            not(self::IDREF and parent::ObjectReference) and
            not(self::Transform[@Algorithm="&base64;"]) and
            not(self::Transform[@Algorithm="&xpath;" and
                text()="string(text())"])
          ]
        </Transform>
        ..
      </Transforms>
    </ObjectReference>
    ..
  </SignedInfo>
  ..
</Signature>

<Manifest Id="M">
  <ObjectReference IDREF="X">
    <Transforms> <!-- This is T_2 -->
      <Transform Algorithm="&xpath;">string(text())</Transform>
      <Transform Algorithm="&base64;"/>
    </Transforms>
    <DigestMethod>&sha1;</DigestMethod>
    <DigestValue>blahblahblahblahblahblahbla=</DigestValue>
  </ObjectReference>
</Manifest>

<Document Id="X">
Iambase64encodingofaPDFdocument=
</Document>

As you can see, T_1 refers to all of M except for Location, IDREF and the
two specific transforms in T_2 that you needed to put the PDF document in X
in the first place.  Thus, if you later decide to delete those two
transforms and the IDREF, and instead add a URL Location, you can do that
without breaking the DigestValue that was computed over (most of) M.
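To make the verification of O_2 concrete, here is a minimal Python sketch
(my own illustration, not normative text; the PDF octets are a placeholder)
of applying T_2 and then the DigestMethod to element X's content:

```python
import base64
import hashlib

# Placeholder for the original PDF octets (hypothetical content).
pdf_octets = b"%PDF-1.3 placeholder"

# The character content of X is the base-64 encoding of the PDF.
x_text = base64.b64encode(pdf_octets).decode("ascii")

# T_2, step 1: the XPath "string(text())" yields X's text content.
text_content = x_text

# T_2, step 2: the base-64 transform decodes it back to the PDF octets.
transformed = base64.b64decode(text_content)

# DigestMethod &sha1; is applied to the transform output; the base-64
# encoded result is what would appear in DigestValue.
digest_value = base64.b64encode(
    hashlib.sha1(transformed).digest()).decode("ascii")
print(digest_value)
```

Core verification of O_2 would then amount to comparing digest_value with
the DigestValue element in the Manifest.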

John Boyer
Software Development Manager
UWI.Com -- The Internet Forms Company

Received on Friday, 17 December 1999 15:08:11 UTC