Re: Streaming - [Re: CryptoOperation and its life cycle]

On Fri, Dec 14, 2012 at 6:31 AM, Mark Watson <watsonm@netflix.com> wrote:
>
>
> Sent from my iPhone
>
> On Dec 14, 2012, at 6:00 AM, "Wan-Teh Chang" <wtc@google.com> wrote:
>
>> On Fri, Dec 14, 2012 at 3:29 AM, Aymeric Vitte <vitteaymeric@gmail.com> wrote:
>>>
>>> I am not talking about a partial hash output.
>>>
>>> To be clear, the question is how to do what is here
>>> https://github.com/Ayms/node-Tor/blob/master/src/crypto.cc#l396-416 (what
>>> is commented out was the initial behavior, i.e. close the hash after
>>> digest; I modified it to keep the state before digest and process it
>>> again after digest) or here
>>> https://gitweb.torproject.org/tor.git/blob/HEAD:/src/common/crypto.c
>>> (lines 1578-1587, same thing)
>>
>> What Aymeric Vitte requested is the ability to fork a digest operation
>> so that we can finish one branch of the fork to obtain the digest of
>> the data up to that point.
>>
>> This is used in the CertificateVerify handshake message of the SSL/TLS
>> protocol, so most native crypto libraries have this function. This
>> issue was discussed before. Digest is the only operation I know of
>> that has use cases for this fork/copy/clone feature.
>
> I can see why something additional is needed to do this with actual streams - but we don't have those yet in the API.

Streaming (via Streams or Blobs) is orthogonal to Aymeric's request.
For that, there is ISSUE-18
(http://www.w3.org/2012/webcrypto/track/issues/18).

Multi-part operations are also orthogonal to Aymeric's request,
because there is no intermediate result for multi-part digesting. For
multi-part encryption/decryption, the API already provides for
incrementally receiving data (via .process()) and for the UA to
incrementally report the availability of output (via progress events).
As explained to Aymeric, there is no 1:1 mapping between "data in" and
a progress notification being fired, but the API does describe how
such functionality can work.
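For illustration, here is a rough TypeScript-flavoured sketch of how
that multi-part flow might look. Only .process() and the progress
events come from the discussion above; the other names (finish, result,
onprogress, oncomplete) are placeholders of my own, not the actual IDL:

  // Non-normative sketch of multi-part encryption. Only process() and
  // progress events are taken from the discussion above; finish(),
  // result, onprogress and oncomplete are placeholder names.
  interface CryptoOperationSketch {
    process(data: ArrayBufferView): void;      // feed the next chunk of input
    finish(): void;                            // no more input; finalize
    result: ArrayBuffer | null;                // output made available so far
    onprogress: ((ev: Event) => void) | null;  // "some output is ready"
    oncomplete: ((ev: Event) => void) | null;  // operation finished
  }

  function encryptChunks(op: CryptoOperationSketch,
                         chunks: Uint8Array[],
                         consume: (out: ArrayBuffer | null) => void): void {
    // There is no 1:1 mapping between a process() call and a progress
    // event; the UA reports output whenever it has some available.
    op.onprogress = () => consume(op.result);
    op.oncomplete = () => consume(op.result);

    for (const chunk of chunks) {
      op.process(chunk);   // "data in"
    }
    op.finish();           // end of input
  }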

>
> With simple ArrayBuffers, what's the problem with calculating separately the digest over A and A|B? (| = concatenation). Is it just to optimize so that we don't do the work for A twice?

Yes. That is the use case. Until you finalize a digest, you can clone
the state and re-use it.

For HMAC, for example, the use case given was that you can compute
the IPAD/OPAD expansion of the key once and then clone that state,
reducing the number of computations you have to do.
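To make that concrete, here is a hedged sketch assuming a hypothetical
cloneable hash object with update()/clone()/digest(); none of these
names are in the draft. The padded key blocks are absorbed once, and
each message only clones that state:

  // Hypothetical cloneable-hash interface, for illustration only.
  interface CloneableHash {
    update(data: Uint8Array): void;
    clone(): CloneableHash;        // snapshot of the internal state
    digest(): Uint8Array;          // finalizes this instance
  }

  declare function createHash(alg: "SHA-256"): CloneableHash;

  const BLOCK = 64; // SHA-256 block size in bytes

  // Assumes the key is at most one block long; real HMAC hashes
  // longer keys down to a block first.
  function xorPad(key: Uint8Array, pad: number): Uint8Array {
    const out = new Uint8Array(BLOCK);
    for (let i = 0; i < BLOCK; i++) out[i] = (key[i] ?? 0) ^ pad;
    return out;
  }

  // Absorb (key XOR ipad) and (key XOR opad) exactly once.
  function precomputeHmac(key: Uint8Array) {
    const inner = createHash("SHA-256");
    inner.update(xorPad(key, 0x36));   // IPAD expansion
    const outer = createHash("SHA-256");
    outer.update(xorPad(key, 0x5c));   // OPAD expansion
    return { inner, outer };
  }

  // Each new message clones the precomputed states instead of
  // re-hashing the padded key blocks.
  function hmac(pre: { inner: CloneableHash; outer: CloneableHash },
                message: Uint8Array): Uint8Array {
    const i = pre.inner.clone();
    i.update(message);
    const o = pre.outer.clone();
    o.update(i.digest());
    return o.digest();
  }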

The same would apply to storing the intermediate state of digest(A)
before it has been finalized (by appending the padding and length).
That way, the implementation does not have to recompute digest(A)
when computing digest(A|B).
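Reusing the hypothetical CloneableHash interface from the HMAC sketch
above, the digest(A) / digest(A|B) case would look like this; again,
this only illustrates the requested behaviour, it is not API in the
draft:

  // Compute digest(A) and digest(A|B) while hashing A only once.
  function digestBoth(A: Uint8Array,
                      B: Uint8Array): { digestA: Uint8Array; digestAB: Uint8Array } {
    const h = createHash("SHA-256");
    h.update(A);                    // absorb A once

    const fork = h.clone();         // snapshot before finalization
    const digestA = fork.digest();  // finalize the clone (padding + length)

    h.update(B);                    // continue the original with B
    const digestAB = h.digest();    // digest(A|B) without re-hashing A
    return { digestA, digestAB };
  }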

This is a valid use case, no question, but it's a potentially
problematic one, which is why it has not *yet* been addressed in the
Editor's Draft and exists as ISSUE-22:
http://www.w3.org/2012/webcrypto/track/issues/22

Received on Friday, 14 December 2012 19:01:35 UTC