Re: ISSUE-1: Mandatory algorithms (was Re: ISSUE-3: Algorithm discovery)

On Tue, Jul 10, 2012 at 1:16 PM, Harry Halpin <hhalpin@w3.org> wrote:
> On 07/10/2012 10:04 PM, Ryan Sleevi wrote:
>>
>> On Tue, Jul 10, 2012 at 12:51 PM, Seetharama Rao Durbha
>> <S.Durbha@cablelabs.com> wrote:
>>>
>>> I was not arguing against defining the exceptions. It is definitely a
>>> required feature, we need to define them. I agree that algorithm
>>> discovery
>>> is needed.
>>>
>>> I was just commenting on the reference to ISSUE-1 - mandatory list (sorry
>>> for the confusion).
>>>
>>> So, with respect to ISSUE-1…
>>>
>>>>> When there (and there eventually will) exist two different sets of
>>>>> MUST-IMPLEMENT, how will the web application behave then? When SHA-1 is
>>>>> broken, or SHA-3 is the new MUST-IMPLEMENT, how will that be addressed?
>>>
>>> I am not sure how there can be two (or more) MUST-IMPLEMENT sets. The
>>> whole
>>> point of a standard is avoiding confusion.
>>> On the second sentence, "when SHA-1 is broken…", I would like to see it
>>> 'deprecated', rather than 'removed' from MUST-IMPLEMENT, for a period of
>>> time. In my work, I see that there are hard reasons why people need
>>> 'legacy'
>>> support. When people do decide they are ready to migrate to SHA-N,
>>> catching
>>> the exception makes sense – because they would have put in place a
>>> migration strategy for moving from SHA-1 to SHA-N, and that strategy is
>>> what
>>> goes into the catch block. I do not see a reason for any catch block
>>> logic
>>> from day 1. If we do not provide mandatory algorithms, what do we expect
>>> people to put in the catch block?
>>
>> Every time we modify the standard, there will be N versions of the
>> standard, because there will be user agents that implement version 1,
>> version 2, version 3, etc. Web applications that wish to work with
>> these user agents must be prepared for each user agent having a
>> different view of what "MUST-IMPLEMENT" means, which is why I don't
>> think there's any particular added value in MUST-IMPLEMENT.
>>
>> Just like web applications today cannot assume that all user agents
>> accessing their site support CSS4 selectors, web applications will
>> have to be prepared for user agents that may (no longer / not yet)
>> support the desired algorithm.
>>
>> And while I agree that it would be nice to have a period of "deprecation",
>> I would rather not have the matter of determining when to go from
>> "deprecated" to "removed" be decided by committee. Different browser
>> vendors have different views on security, and they have different
>> value-tradeoffs: Some U-As may be focused on a particular market
>> segment where no old feature can be removed, while other U-As may be
>> focused on market segments where security is the most important
>> aspect.
>>
>> Having the standard dictate MUST-IMPLEMENT means that U-As will lose
>> the flexibility to make independent security choices while still being
>> a compliant implementation. For example, a U-A may decide to go from
>> implemented -> disabled by default, and require a user to explicitly
>> enable it before it's available. Under a MUST-IMPLEMENT scenario, this
>> would be non-compliant behaviour.
>>
>> Aside from theory, I think as an implementor, if we had security
>> concerns with an algorithm that we believed put our users at risk,
>> then regardless of any MUST-IMPLEMENT language in the spec, we'd
>> move to disable it to protect our users. And I suspect other browsers
>> would do the same. So that's why I think any MUST-IMPLEMENT language
>> is non-binding.
>>
>> "Recommended" algorithms are both fine and a good thing, and I don't
>> think there will be much of any debate about adding new algorithms to
>> recommended - but must-implement feels like a reach.
>>
>> Cheers,
>> Ryan
>>
>>>
>>>
>>> On 7/10/12 11:43 AM, "Ryan Sleevi" <sleevi@google.com> wrote:
>>>
>>>
>>>
>>> On Tue, Jul 10, 2012 at 10:22 AM, Seetharama Rao Durbha
>>> <S.Durbha@cablelabs.com> wrote:
>>>>
>>>> On 7/9/12 7:26 PM, "Ryan Sleevi" <sleevi@google.com> wrote:
>>>>
>>>>>> Note that none of the above semantics would necessarily be altered by
>>>>>> a
>>>>>> MUST-IMPLEMENT registry (ISSUE-1), since there would still
>>>>>> need to be some form of error handling for invalid constants/strings
>>>>>> and
>>>>>> for unsupported key+algorithm+operation tuples.
>>>>
>>>> The difference is whether the developer needs to 'explicitly' catch
>>>> UnsupportedAlgorithmException and 'do something about it', or 'just not
>>>> bother', as the algorithm they picked is guaranteed to be available.
>>>>
>>>> Errors caused by invalid constants/strings must be caught at development
>>>> time.
>>>>
>>>>
>>> I do not believe this is a reasonable approach, nor does it seem to be
>>> encouraged by the recommendations for W3C standard web APIs. [1] [2] [3].
>
>
> I think that in general, we should have some subset (the JOSE subset seems
> the obvious and ideal candidate) as a SHOULD implement. If all we have is a
> discovery algorithm, then I cannot see how we will create test-cases that
> are meaningful and that Web developers can rely on. We need to be able to
> say, for a given browser X, that it supports this functionality as embodied
> in test-cases. Now, if a browser *only* throws errors, then obviously that
> is useless, but we don't want that technically passing the test-cases. We
> want to say that's non-conforming.
>
> At the same time, I can see real value in having some generic extensible
> framework, of which I see this discovery mechanism as one possible
> approach. I'm wondering if there are any other alternative approaches?

Harry,

I'm not sure I understand why this is required. For example, how are
test cases for the <video> tag covered, or <object>, or <img> or any
of the other hypermedia tags?

For example, I'm not sure why we cannot detach the "API specification"
(these are the state machines, these are the error handling routines)
from "Algorithm specification" (this is how RSA-OAEP behaves, this is
how AES-GCM behaves).

Test cases for the API specification can focus on the objects having
the correct types / methods, the exception types existing, and any
user interaction.
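
As a very rough sketch of what such a test could look like (the
createOperation() factory and the method names below are hypothetical
placeholders, not the draft's actual identifiers):

    // API-surface conformance sketch; createOperation() is a
    // hypothetical factory standing in for whatever the spec defines.
    function testApiSurface() {
      console.assert(typeof window.crypto === "object",
                     "crypto entry point exists");
      console.assert(typeof window.crypto.createOperation === "function",
                     "operation factory exists");
      var op = window.crypto.createOperation("SHA-256");
      console.assert(typeof op.processData === "function",
                     "processData() exists");
      console.assert(typeof op.complete === "function",
                     "complete() exists");
      console.assert("onerror" in op, "onerror handler slot exists");
    }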

For the algorithm specifications, the tests can focus on individual
algorithm handling.

However, for error handling, it seems like some tests will not be able
to be programmatically simulated by a test suite, and must be
manually/synthetically simulated. For example, how might you test a
system failure between .processData() and .complete(), to ensure that
onerror is raised appropriately?
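
To make that concrete, here is a hand-wavy sketch of such a synthetic
test (the createOperation() factory and the fault-injection hook are
hypothetical names, not the draft's actual identifiers; the point is
only that the failure must be injected by the harness, not triggered
through the API itself):

    // Simulate a failure between processData() and complete(), then
    // check that onerror is raised and oncomplete is not.
    function testFailureBetweenProcessAndComplete(done) {
      var op = window.crypto.createOperation("SHA-256");  // hypothetical
      var sawError = false;
      op.onerror = function () { sawError = true; };
      op.oncomplete = function () {
        console.assert(false, "complete must not fire after a failure");
      };
      op.processData(new Uint8Array([1, 2, 3]));
      testHarness.injectSystemFailure(op);  // hypothetical harness hook
      op.complete();
      setTimeout(function () {
        console.assert(sawError, "onerror was raised");
        done();
      }, 0);
    }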

Beyond ensuring IDL conformance, I would think all tests can belong to
the algorithms - that is, IF a user agent implements RSA, here are tests
1-15 to ensure it implements the "correct" form of RSA. IF a user
agent does not, it automatically passes that test suite, or the suite is
simply not applicable.
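
In other words, something along these lines (again a sketch with
hypothetical names, and assuming for illustration that an unsupported
algorithm is reported by an exception at creation time):

    // Run the RSA suite only if the U-A implements RSA; otherwise the
    // suite is reported as not applicable rather than failed.
    function runRsaSuiteIfSupported() {
      var op;
      try {
        op = window.crypto.createOperation("RSAES-PKCS1-v1_5");  // hypothetical
      } catch (e) {
        testHarness.skipSuite("RSA");  // hypothetical harness call
        return;
      }
      runRsaTests(op);  // tests 1-15 for the "correct" form of RSA
    }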

I was also hoping you could explain the statement: "Now, if a browser
*only* throws errors, then obviously that is useless, but we don't
want that technically passing the test-cases. We want to say that's
non-conforming."

Why?

What if a U-A wanted to only implement custom algorithms (for example,
Netflix's user authentication use case)? If their U-A wanted to
only implement those algorithms, why is that a bad thing? Yes, it
means their U-A is not compatible with sites X, Y, and Z that expect
RSA, but isn't that already true if I use <img> with WebP or APNG in
various UAs, if I use MP4 audio with <audio> elements, or if I include
<a href=""> links with some-custom-scheme:// ?

>
>>>
>>> My comment was reflecting the need to have the error handling state
>>> machine fully defined in the spec. Failing to specify what happens when
>>> an
>>> invalid constant/string is provided means the API is incomplete, and
>>> developers have no way of knowing what will happen. Will the U-A just
>>> crash?
>>> Will the API ever call the onerror callback? If not, what happens if the
>>> user keeps calling processData() and supplying more data? Will it cause a
>>> syntax error that causes all JavaScript to fail executing on the page?
>>>
>>> When there (and there eventually will) exist two different sets of
>>> MUST-IMPLEMENT, how will the web application behave then? When SHA-1 is
>>> broken, or SHA-3 is the new MUST-IMPLEMENT, how will that be addressed?
>>>
>>> If we update the specification, and say "SHA-1 is no longer
>>> MUST-IMPLEMENT
>>> because it was broken", what does that mean for web applications that
>>> were
>>> using SHA-1? What will their execution environments be like? Will it
>>> break
>>> all script on that page?
>>>
>>> For interop testing and for reference implementations, I do not believe
>>> we
>>> will be able to escape the need to specify error handling. Which is why I
>>> believe that these concerns remain wholly independent of the discussion
>>> of
>>> the MUST-IMPLEMENT question. Because we MUST have error handling, we
>>> implicitly
>>> have discovery. The only question is whether we want to use error
>>> handling
>>> as the /only/ form of discovery.
>
>
> We will need to implement error handling to go to the next stage of the W3C
> process after Draft, BTW, as otherwise we won't have consistent test-cases.
> That could be 'just not bother', I assume, but I could also see a good case
> for giving more informative error messages. Whether or not that requires
> discovery is, I think, still a bit up in the air; I think there may be some
> crossed wires as regards what discovery means.
>
>>>
>>> [1]
>>> http://lists.w3.org/Archives/Public/public-device-apis/2011Nov/0058.html
>>> [2]
>>>
>>> http://scriptlib-cg.github.com/api-design-cookbook/#don-t-use-numerical-constants
>>> [3]
>>>
>>> http://www.w3.org/2001/tag/doc/privacy-by-design-in-apis#privacy-enhancing-api-patterns
>>>
>
>

Received on Tuesday, 10 July 2012 20:36:40 UTC