- From: <noah_mendelsohn@us.ibm.com>
- Date: Thu, 12 Jun 2003 11:27:25 -0400
- To: Rich Salz <rsalz@datapower.com>
- Cc: "xml-dist-app@w3.org" <xml-dist-app@w3.org>
Rich Salz writes:
>> In another posting, Gudge asked about signing times.
>> In most cases (i.e., assuming no tricky XSLT
>> transforms involved), the cost of signing is
>> ka+b, where a is the length of the data, and b
>> is fixed based on the number of bits in the key.
Right. And though I'm not a crypto expert, my impression is that the
constant b can be significant, or even dominant, in many practical
scenarios (though of course there is always an a > b/k beyond which the
first term dominates). What I think is missing from Gudge's analysis is
the potential conversion time to base64, the associated loss of locality
in memory access, etc. So, implemented in the naive way in which the
entire "binary buffer" is first converted to characters and then signed, I
would assume the following:
Let's say the original binary buffer has length lbin. Applying Rich's
formula, the cost of signing it without conversion is:
   k*lbin + b
The cost of converting and then signing the character form is:
   c*lbin + k*(1.3*lbin) + b = (c + 1.3*k)*lbin + b
where c is the per-byte cost of converting binary into (roughly 1.3)
characters. Depending on how it's actually done, memory usage might grow
from lbin to 2.3*lbin, possibly hurting locality, cache hit ratios, etc.
for large buffers. Of course, a careful implementation would pipeline the
conversion and have only a modest number of bytes in character form at any
one time.
Do I have this right?
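To make the pipelining point concrete, here is a rough sketch (Python,
purely my own illustration, not anything a real XML Signature toolkit
does as such): the incremental digest stands in for the hashing step
inside signature generation, and the chunk size and function name are
made up. Real signing would of course also involve canonicalization and
the rest of the DSIG machinery.

   import base64
   import hashlib

   # Multiple of 3, so only the final chunk can need '=' padding and the
   # concatenated chunk encodings equal the encoding of the whole buffer.
   CHUNK = 3 * 1024

   def digest_base64_stream(stream):
       """Digest the base64 text form of a binary stream without ever
       materializing the whole ~1.3x-larger character buffer."""
       h = hashlib.sha1()
       while True:
           chunk = stream.read(CHUNK)
           if not chunk:
               break
           # ~1.3 chars per input byte, but only a few KB live at a time
           h.update(base64.b64encode(chunk))
       return h.hexdigest()

   # e.g.:  with open("payload.bin", "rb") as f:
   #            print(digest_base64_stream(f))

With that structure the cost is still roughly (c + 1.3*k)*lbin + b, but
the working set stays at lbin plus a few kilobytes rather than 2.3*lbin.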
BTW (off topic): Rich, I really liked your article at
http://webservices.xml.com/pub/a/ws/2003/06/10/salz.html.
------------------------------------------------------------------
Noah Mendelsohn Voice: 1-617-693-4036
IBM Corporation Fax: 1-617-693-8676
One Rogers Street
Cambridge, MA 02142
------------------------------------------------------------------