Re: RE: An Xpath-based Solution

Excellent, John.  Well said.  I agree with all the points you made (although I
hadn't considered replacing IDREF; covering it with XPath makes sense).  I'd
just like to walk through a quick history (as I understand it), hoping it helps
make the point.

Originally, signing something with a (public key) digital signature meant
running a hash (digest) on the object and encrypting the hash with the private
key.  The hash itself was not retained, just the encrypted hash, so the object
had to be available for both signing and validating.  Dereferencing wasn't an
issue; it was a necessity.  I'll refer to this as the "primitive approach".
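
In rough notation (hash and public-key algorithms left abstract, since the
point holds for any choice):

   sign:    S = Encrypt(privateKey, Hash(object))
   verify:  Decrypt(publicKey, S) == Hash(object)

Only S is kept, so the verifier must recompute Hash(object), and for that it
must be able to get at the object itself.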

For security reasons, it was decided that we must sign the signature and digest
algorithms along with the object.  To accomplish this, we didn't discard the
digest.  It became data to be signed along with the algorithms, and the
signature became indirect (we signed the hash of the object, not the object
itself).  In theory, we could have just concatenated the algorithm names with
the object and discarded the hash, which would have been like the primitive
approach but with the added security.
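
In the current draft's terms, that indirection looks roughly like this (a
sketch only, with most element detail elided):

<Signature>
  <SignedInfo>
    <SignatureMethod .../>              <!-- signature algorithm: signed -->
    <ObjectReference ...>
      <DigestMethod>...</DigestMethod>  <!-- digest algorithm: signed -->
      <DigestValue>...</DigestValue>    <!-- digest of the object: signed -->
    </ObjectReference>
  </SignedInfo>
  <SignatureValue>...</SignatureValue>  <!-- computed over SignedInfo, not the object -->
</Signature>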

For simplicity, we have allowed multiple digests to be signed with the same
signature.  Now come applications (OLTP) that exploit the list of digests by
saying that dereferencing all the listed objects isn't necessary (or even
desirable) for validation (we may not want the application to be able to
dereference some of the objects, for privacy reasons).  What's worse, these
applications are going to be required to dereference different parts at
different times in the process for validation.

Now it seems to me that some of the key members of this effort are sacrificing
the basic functionality of signing documents in order that they may sign
digests.  I think this is a short-sighted and narrow focus: it concentrates on
a rare requirement rather than the common one.

Maybe we could include a note (in the spec) to the core application designer. 
Ideally, we could specify a proposed syntax (as was done with DOM), but it is
out of our current scope.  We should advise designers that they must specify
exactly what is being signed or verified, and that they have the responsibility
of making this clear to their (human or application) clients.  If we don't
specify this, then we are liable for the ensuing misunderstandings.

Suppose we have a list of object references in SignedInfo, say objects with ids
O_1, O_2, and O_3.  The application could specify exactly which objects we are
doing hash-validation on (dereferencing).  For example:

Validate(SignatureId, ObjectId);
Validate(SignatureId, ObjectIdList);
Validate(SignatureId, "all");
Validate(SignatureId, "none");
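
To make this concrete, here is a minimal sketch of the SignedInfo such calls
would operate against (detail elided):

<SignedInfo>
  <ObjectReference IDREF="O_1">...</ObjectReference>
  <ObjectReference IDREF="O_2">...</ObjectReference>
  <ObjectReference IDREF="O_3">...</ObjectReference>
</SignedInfo>

Validate(SignatureId, "O_2") would dereference and digest-check only O_2,
while the signature over SignedInfo itself still covers all three digests
either way.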

My point is that we should place stronger emphasis on the basic purpose of
digital signatures, the one satisfied by the primitive approach mentioned at the
beginning, and the one "everyone" (developers and clients) expects.  Also, I
don't believe we should sacrifice basic capability (e.g. moving documents) to
the half-sig requirement.  If a manifest is application-specific (OLTP), it
should _really_ stand out (ApplicationSpecificManifest or Type="Application
Specific").

Thanks,
Rich


____________________Reply Separator____________________
Subject:    RE: An Xpath-based Solution  
Author: "John Boyer" <jboyer@uwi.com>
Date:       12/17/99 12:05 PM

Hi Rich,

What I was saying is that the minutes of the latest telecon indicated that no
syntax change would be made, with the result that core behavior would NOT
completely validate the signature in the example I gave below.

It is not me personally but rather the WG majority that is proposing that
validation of the Manifest reference be passed off to the application.  At
one point I proposed adding a Verify="Yes" to Reference (or something like
that) to indicate that core behavior should validate the Reference.  The
opposition to the idea was steep, for reasons that weren't really clearly
spelled out in the telecon.  In fact, the only good argument against the
Verify attribute, stated by Don, was that a true validation (in a court case,
for example) was likely to verify 'everything' signed by the signer.
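
For reference, what I proposed amounted to something like the following (the
attribute name is hypothetical; the exact spelling was never settled):

<ObjectReference IDREF="M" Verify="Yes">
  <!-- Verify="Yes": core behavior MUST dereference and digest-check this
       reference rather than passing it off to the application -->
  ...
</ObjectReference>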

At this point, it became clear to me that the real problem here is that the
default should be to verify 'everything' that the signer signed, and this
default has been hijacked by the desire to create optimizations for
intermediate processing steps in certain applications like IOTP.

So here's the question:  Why has it become the default that we verify less
than what the signer signed?  I ask because it seems that the current design
favors optimizations of certain applications to the detriment of other
applications that don't work at all with core verification behavior.

It seems to me that if an application wants to optimize by not always
verifying 'everything', then that is the business of the application, and
the application's designers should be put to the task of defining when and
how much verification they will do.  I realize that this may be a hard pill
to swallow because it requires looking at signatures from a different
perspective than the one represented in the Brown draft (which was very
greatly influenced by protocol message signing needs like those in IOTP).

BTW, some have said that we are venturing too far into application land.
Firstly, this argument seems to be whipped out only when it is convenient.
For example, IDREF is a mistake but was added so that certain applications
could function without using an XPath transform.  So there is only sometimes
a hesitation to change the syntax, and I am not convinced that this
hesitation is being fairly applied to all scenarios.  Secondly, I think that
some are thinking too heavily about core behavior being restricted to
signing a preconstructed bucket of bits, and this forgets that a very large
part of our mandate is to robustly sign XML.  The bucket-of-bits approach
assumes that a message M is preconstructed for core and ready for hashing,
whereas in signing XML, the ability to precisely define how the message M is
constructed MUST be part of the process.

To be honest, the inclusion of XPath transforms has given us a good
mechanism for flexible construction of M, except that almost every time
someone in the WG finds they need it, they try to invent alternate syntax
(like IDREF) or semantics (like automatic shaving of start and end tags, and
base-64 decoding), all to avoid the inevitable.
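
To be concrete, the tag shaving and base-64 decoding that keep getting
reinvented are already expressible with the existing transforms (this is
exactly the T_2 from my example quoted below):

<Transforms>
  <!-- keep only the character content, i.e. shave the start and end tags -->
  <Transform Algorithm="&xpath;">string(text())</Transform>
  <!-- then base-64 decode that content -->
  <Transform Algorithm="&base64;"/>
</Transforms>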

Finally, I also don't find it to be a satisfying argument that we shouldn't
make changes because vendors are already implementing this.  It is a working
draft, and that is the risk one takes when implementing a working draft.
The process of mulling through the details until we get the right solution
should NOT be encumbered by vendors who've jumped the gun.  They will have
to make changes, but they are still ahead of those who did not start at all.
Besides, as a developer I'm quite well aware of how little actual code
change is necessary for what I'm talking about (e.g. far less effort than it
took to write this email).

John Boyer
Software Development Manager
UWI.Com -- The Internet Forms Company


-----Original Message-----
From: w3c-ietf-xmldsig-request@w3.org
[mailto:w3c-ietf-xmldsig-request@w3.org] On Behalf Of
rhimes@nmcourt.fed.us
Sent: Friday, December 17, 1999 11:03 AM
To: w3c-ietf-xmldsig@w3.org
Subject: Re: An Xpath-based Solution



Thanks, John.  This was the approach I preferred.  I'm still confused as to
whether core behavior would validate the reference, or if you are proposing
that validation (of the reference) be passed off to the application.  If it
is part of core behavior, I don't see the difficulty in moving the document
(internal to external, external to internal, or external to external).  Such
difficulty was implied in the telecon.

Re your statement in the previous message:
"Basically, the core signature verifies.  If your app took the signature
apart, then presumably it can put it back together again before trying to
verify it with core behavior.  Or, your app could support manifest
validation."

The first part of this paragraph doesn't appear to refer to your proposed
approach.  I read this as saying that because we are using a manifest (in
your procedure below), manifest support is pushed off to the application.
Thus, core behavior does not validate the referenced document against the
hash.  If true, I disagree, and strongly believe that validation of the
referenced document (against the hash) should be part of core behavior in
your proposed approach.  If my concern is based on a misunderstanding, I'd
appreciate a clarification.

Thanks,
Rich


____________________Reply Separator____________________
Subject:    An Xpath-based Solution
Author: <w3c-ietf-xmldsig@w3.org>
Date:       12/16/99 9:16 PM

The example below will eventually become part of the scenarios document, but
Joseph requested it be posted in today's telecon.

Let's do the complete example of your scenario, which includes the XPath you
asked about above.  You have a SignedInfo that contains an ObjectReference
O_1.  The IDREF of O_1 indicates a Manifest M.  In the Manifest, there will
be an ObjectReference O_2 whose IDREF indicates an element X, where the
character content of X is the base-64 encoded PDF document of interest.  The
transforms T_2 in O_2 include a base-64 transform *after* an XPath of
"string(text())" (note that child:: is the default axis, so I've left it out
of the example).  The transforms T_1 of ObjectReference O_1 (the one in
SignedInfo) must take as input the Manifest M, and yield as output (M minus
(T_2 + IDREF + Location)), but only if T_2 is exactly as described above.
T_1 will contain the specific description of the T_2 that can be omitted,
not just a blanket statement saying that all transforms can be thrown out.

<Signature>
  <SignedInfo>
    <ObjectReference IDREF="M">
      <Transforms> <!-- This is T_1 -->
        <Transform Algorithm="&xpath;">
          descendant::node()
          [
            not(self::Location and parent::ObjectReference) and
            not(self::IDREF and parent::ObjectReference) and
            not(self::Transform[@Algorithm="&base64;"]) and
            not(self::Transform[@Algorithm="&xpath;" and
                text()="string(text())"])
          ]
        </Transform>
      </Transforms>
      ...
    </ObjectReference>
    ...
  </SignedInfo>
  ...
</Signature>

<Manifest Id="M">
  <ObjectReference IDREF="X">
    <Transforms> <!-- This is T_2 -->
      <Transform Algorithm="&xpath;">string(text())</Transform>
      <Transform Algorithm="&base64;"/>
    </Transforms>
    <DigestMethod>&sha1;</DigestMethod>
    <DigestValue>blahblahblahblahblahblahbla=</DigestValue>
  </ObjectReference>
</Manifest>

<Document Id="X">
Iambase64encodingofaPDFdocument=
</Document>

As you can see, T_1 refers to all of M except for Location, IDREF, and the
two specific transforms in T_2 that you needed to put the PDF document in X
in the first place.  Thus, if you later decide to delete those two
transforms and the IDREF, and instead add a URL Location, you can do that
without breaking the DigestValue that was computed over (most of) M.
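
In other words, M could later be rewritten as something like the following
(the URL is hypothetical), and the ObjectReference O_1 in SignedInfo would
still validate because T_1 never covered the parts that changed:

<Manifest Id="M">
  <ObjectReference Location="http://example.com/doc.pdf">
    <DigestMethod>&sha1;</DigestMethod>
    <DigestValue>blahblahblahblahblahblahbla=</DigestValue>
  </ObjectReference>
</Manifest>

The DigestValue inside M is also unchanged: it was computed over the decoded
PDF bytes, which are exactly what the Location now points at.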

John Boyer
Software Development Manager
UWI.Com -- The Internet Forms Company


Received on Monday, 20 December 1999 12:31:17 UTC