
Re: On optionality

From: Mark Watson <watsonm@netflix.com>
Date: Wed, 28 Nov 2012 04:42:47 +0000
To: Ryan Sleevi <sleevi@google.com>
CC: "public-webcrypto@w3.org Group" <public-webcrypto@w3.org>
Message-ID: <F97EB7CB-8576-4FB0-B994-88AC412B19F0@netflix.com>

On Nov 27, 2012, at 7:44 PM, Ryan Sleevi wrote:

> On Tue, Nov 27, 2012 at 6:29 PM, Mark Watson <watsonm@netflix.com> wrote:
>> On Nov 27, 2012, at 5:50 PM, Ryan Sleevi wrote:
>>> On Tue, Nov 27, 2012 at 4:23 PM, Mark Watson <watsonm@netflix.com> wrote:
>>>> All,
>>>> This came up a lot in the recent discussion, but we did not have time to discuss it on the call yesterday.
>>>> All optional features are a pain for application developers, but I think we need to distinguish between two types of optionality:
>>>> (a) features that are just optional for UAs to implement
>>>> (b) features which are not present on all devices
>>>> It is in our power to reduce or eliminate features of type (a) from our specification, but the fact that different devices have different capabilities is just a fact of life over which we have limited influence. It would be ridiculous, for example, for us to say that because *some* devices support 3D video, *all* must support it. With respect to such features, there are three possibilities:
>>>> (i) don't provide support for any such features in the specification
>>>> (ii) require that where device support is absent, the UA must provide fall-back support in software
>>>> (iii) the feature is optional in the specification (with some kind of capability discovery, although this could just be the possibility to return NOT SUPPORTED to some operation).
>>>> (i) is a rather draconian least-common-denominator approach that would make our gadget lives rather dull (IMO).
>>>> (ii) should be preferred wherever possible.
>>>> But there remain some cases where (ii) is not possible, and then we have (iii). (ii) is not possible where the feature relies on hardware support (by definition). Whether a sensible software fall-back exists needs to be examined on a case-by-case basis, but sometimes there will be nothing of value that can be done without the hardware capabilities.
>>> And in the realm of DAP, in the realm of WHATWG, and in the realm of
>>> WEBAPPS, there is a common pattern for this:
>>> navigator.foo
>>> This allows trivial detection of a feature:
>>> var foo = navigator.foo;
>>> if (foo == undefined) {
>>> // not supported
>>> }
>>> A reduced example of this can be seen in [GAMEPAD], which should be a
>>> minimal spec for perusing.
>>> In some cases with WHATWG (and to a lesser degree, whatever HTMLWG has
>>> copied from the Living Standard in WHATWG), this may show up with
>>> explicit methods for gaining a specific context or determining support
>>> for it.
>>> For example, from [CANVAS]
>>> var foo = canvas.getContext('webgl', ...);
>>> // var foo now contains a WebGLRenderingContext
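[A self-contained rendition of the detection pattern described above, runnable outside a browser. The `keys` feature name and the stand-in navigator objects are illustrative assumptions, not spec text.]

```javascript
// Minimal sketch of navigator-based feature detection. An unsupported
// feature is simply absent from the navigator object, so detection is
// a plain property check — no special query API needed.
function hasFeature(nav, name) {
  return nav[name] !== undefined;
}

// Stand-in navigator objects representing two different UAs:
const uaWithKeys = { keys: {} };   // UA that exposes the hypothetical feature
const uaWithout  = {};             // UA that does not

console.log(hasFeature(uaWithKeys, 'keys'));  // true
console.log(hasFeature(uaWithout, 'keys'));   // false
```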
>>>> The model of optionality we have in our specification now is Algorithms. It's wrong to characterize this as "void *dovoid(void*)". What we have is several instances of "<explicit type> do<explicit operation>( void *, <other explicit parameters> )". Yes, the first argument is completely open (The AlgorithmIdentifier, containing AlgorithmParameters), but what the method does, the return type and the input message data are explicitly defined. And the situation is the same for all our operations.
>>> No, the API contract proposed **is** "Any DoSomething(Any, ...)" -
>>> simply relying on textual descriptions of possible values for "any".
>>> This is the exact same, from a programmer's perspective, as "void*
>>> dosomething(void*, void*)". The fact that a cast occurs within the
>>> function itself is irrelevant from the perspective of any developer
>>> who is using this API. They cannot infer the expected types,
>>> arguments, or parameters. They cannot have any reliable guarantee on
>>> the error mode when things fail. Nor can they even proactively
>>> determine if something is supported, not implemented, broken,
>>> whatever.
>> Hmm, so you are saying that even though the function name says "encrypt" and the normative text says that the operation to be performed is encryption, the fact that the AlgorithmParameters are completely open and that the result has "any" type means you could abuse this method for anything at all.
>> This seems to apply to all our methods, though I am not sure it is serious.
> That is absolutely a concern, and absolutely serious. It's
> demonstrably already happened with the proposal to overload
> Import/Export for something that's not at all importing key material.
>>> When we talk about device specific features, the end-user/developer
>>> needs to be at the forefront. They need to be able to easily detect if
>>> a feature is available. Whether this is
>>> "canvas.supportsContext('webgl')" or
>>> "navigator.{gamepad,battery,device}", it provides concrete objects and
>>> interfaces that can be used by developers, with strong guarantees
>>> enforced not by prose, but by the interface itself.
>> Ok, so if KeyStorage is no longer used for keys created with the WebCrypto API (they are stored in IndexedDB) and only for pre-provisioned keys, moving the keys attribute from Crypto to navigator would address this issue, right ?
> On the merits of similarity to the aforementioned specifications and
> approaches, navigator.keys / navigator.device.keys is absolutely
> comparable. Which has been the whole point of saying that the core
> spec defines a "Key object", and that it can be re-used by other
> specifications to return Key objects as however appropriate for their
> use case, and as the WG is so interested.
>>>> This might not be the best model. Indeed we have an ISSUE to be clearer about algorithm parameters vs. operation inputs, which, if solved, could lock things down in a cleaner fashion.
>>> I strongly believe the two issues are orthogonal, as expressed before.
>>> The needs for algorithms and the needs for key discovery, while
>>> conceptually similar, do not require nor necessarily benefit
>>> from shared paradigms.
>> Well, obviously: "require" and "*necessarily* benefit" are both rather strong conditions. But that doesn't mean that following the same paradigm isn't a reasonable thing to do.
>>> As one specific example, key discovery is dependent on the UA and the
>>> parameters, whereas for any cryptographic operation, it's dependent on the
>>> UA, the parameters, and the key object itself (and its
>>> implementation). It is this latter fact that resulted in the proposed
>>> API, which itself absolutely needs improvement and/or reworking.
>> This seems like a minor difference to me.
> Fair enough. I suspect we'll continue disagreeing on this.
>>>> For the issue of retrieving pre-existing keys, there is certainly a connection with hardware. And there is diversity in what is supported by devices, just as there is diversity in algorithm support. And there is no imperative to support everything day one (just as with algorithms). So I find it hard to understand why the existing model for optionality shouldn't apply here too.
>>> As shown in DAP, as shown in WHATWG, as shown in WEBAPPS, the approach
>>> for such optionality has been accomplished by separate specifications.
>> There's a difference between essentially independent blocks of functionality and the much more integrated situation we have here. We are talking about finding Key objects which are specified in our main draft. There are a variety of types of Key object that the UA/device supports; they may differ in the algorithms they support, in how they are created and in how they are discovered.
> No, there's not really. Both [WEB-INTENTS] and [WEBGL] demonstrate
> that splitting into multiple specs, even if "conceptually" similar, is
> fine.
> Alternatively, you can look at a much baser concept, such as "files
> and streams", and see that there too, multiple specs exist that
> cooperatively extend functionality.
> For example, you can see this exact approach with the [FILEAPI], then
> looking at APIs that build on that such as [FILE-SYSTEM] or
> [FILE-WRITER], or through the [STREAMS] API extension that builds on
> [FILEAPI] http://www.w3.org/TR/FileAPI/
> [FILE-SYSTEM] http://www.w3.org/TR/file-system-api/
> [FILE-WRITER] http://www.w3.org/TR/file-writer-api/
> [STREAMS] http://dvcs.w3.org/hg/streams-api/raw-file/tip/Overview.htm
>>> Consider, for example, the Core Device interface [CORE-DEVICE], which
>>> is then extended by various specs [MESSAGING-API]. Or consider Web
>>> Intents, which defines a core API but *does not* specify what or how
>>> the intent should behave. Instead, that's left for specs like [PICK-MEDIA] and [CONTACTS].
>>> As you can see, the web platform already has a number of ways to deal
>>> with this, and we arguably should not go create yet another one.
>> We have no choice with respect to algorithms.
> I disagree.
>>> I agree with your sentiment - we do not want further optionality.
>>> However, as you can see by the numerous examples I've provided,
>>> optionality has been approached by saying that implementers MUST
>>> implement every part of the interface, and defining that interface in
>>> an independent specification (GAMEPAD, CORE-DEVICE, MESSAGING-API). If
>>> a U-A doesn't implement the spec, obviously they don't implement the
>>> interface either, and developers can easily discover this.
>> I'm not opposed to separate specifications, but I'm having trouble understanding the criteria for drawing the line. Do you want a separate spec for each crypto algorithm ? Or should we define "families" of algorithms and have one spec for each "family" ? Is discovery of all kinds of pre-existing key a single capability, or are pre-provisioned origin-specific keys separate from keys on smart-cards ? (We certainly can't require devices that support one to support the other). Coarser-grained optionality is certainly better than fine-grained. When the grain is coarse enough to justify a separate spec, that is obviously the way to go.
> We discussed this when discussing the security considerations of
> algorithms, and with proposals to ontologically organize things. One
> proposal was to try and look at things in terms of "Authenticated
> encryption", "digital signatures", "hashing", etc. Another proposal
> was to look at algorithms and various modes - for example, PKCS#1
> But as you can see with algorithms like SEED or GOST, it may make more
> sense to put those as separate documents themselves. This has
> certainly worked well for NIST, worked well for BSI, and worked well
> for the IETF.
> Our current approach is most similar to the IRTF work of "List all the
> algorithms at once". Great. That's just one approach - and not
> necessarily (arguably?) the right one.
> "Key discovery" is nearly identical to the concerns of both [FILEAPI]
> and [STREAMS], and it is not at all unprecedented for them to be extended -
> consider, for example, how [MEDIASOURCE] does just that, extending
> createObjectURL for MediaSource objects, or [MEDIASTREAM] extending
> createObjectURL to take MediaStream objects.
> That is - the core spec defines the notational object representation,
> and other specs can describe ways of getting such objects - or
> extending the concepts introduced in the spec for particular needs.
> We've identified possible key sources during previous discussions
> (particularly when discussing scope and out of scope) as:
>  * Some form of named smart card/secure element (Gemalto)
>  * Based on certificate criteria (Gemalto, Korean use case, desktop
> browsers for sysapps)
>  * Kerberos key stores
>  * **named** pre-provisioned keys
>  * **unnamed** pre-provisioned keys
>  * TLS keying material
>  * JOSE/DOMCrypt?
> [MEDIASOURCE] http://dvcs.w3.org/hg/html-media/raw-file/tip/media-source/media-source.html
> [MEDIASTREAM] http://www.w3.org/TR/mediacapture-streams/
>>> In the few cases where extensible APIs are provided (CANVAS,
>>> WEB-INTENTS), the primary spec merely notes the normative behaviour of
>>> the extensible API, and it's up to some other spec to define how that
>>> extensible API behaves [WEBGL, PICK-MEDIA, CONTACTS].
>>> Note that even in the case of something like [CANVAS], where the
>>> context is an extensible string identifier (arguably similar in
>>> concept, but not at all in implementation, to AlgorithmIdentifier),
>>> there still exists the fundamental means to query support, with as
>>> limited information as possible, from a user agent [CANVAS-SUPPORTS]
>> I certainly agree that straightforward capability discovery is good.
>>> This is even true for APIs with double discoverability (CANVAS ->
>>> WEBGL, WEBGL -> WEBGL-EXTENSION), which exposes a DOM-friendly
>>> discovery API (getSupportedExtensions), which uses a separate registry
>>> [WEBGL-REGISTRY], and which uses concrete interface types with
>>> submethods for extension (eg: [DEBUG-SHADERS]).
>>> As such, I stand by my objections to the previous proposal from Netflix:
>>> * The core spec (as to be implemented by all user agents) does not
>>> need to and should not reference features that are not universally
>>> implementable, beyond acknowledging they may exist. Arguably, this
>>> argument could (and eventually will have to) extend to Algorithms as
>>> well.
>> Yep, I don't see how you avoid that extension if that is your logic. It would be good to see a concrete proposal for how you would like to address that. If the group agrees then I'd be happy to make my proposal align with that.
> Sure. I've already provided numerous examples of how this has been
> addressed within other specifications. I've made the specific argument
> on why I believe key discovery should follow this, and provided
> numerous examples to show how this has been approached. We should
> focus on that, in particular, and can revisit algorithms afterwards.

I'd prefer to do things consistently (for many of the same reasons you've been giving).

You've raised a number of concerns with the way algorithms are handled in our specification today. Could you make an outline proposal for how you would like to address those, so we can see if there is support ? Then we can apply that approach consistently. 

>>> * Extensibility of the DOM has historically been accomplished by
>>> providing concrete interfaces with concrete methods, easily queried by
>>> developers (to see if a user agent supports it), with as strict a type
>>> enforcement as possible (eg: methods with concrete signatures)
>> We could obviously ensure that the key discovery method (asynchronously) returns a Key object, if that helps. For the input params we have the same problem as algorithms.
> I'm not sure I follow this at all. Your reply suggests that you may
> not have had the opportunity yet to review the specs I linked.
> Asynchronicity was not at all what I was referring to when I talked
> about extensibility of the DOM.

You complained above that the proposed method for key access did not have a 'concrete' signature, in that the return type could (effectively) be 'any' and the parameters too (in the form of AlgorithmIdentifier). I'm pointing out that the return type can easily be addressed by making it concretely a Key object, and that the openness of the AlgorithmIdentifier is a problem shared by all the methods (and so deserves a common solution). (I only used the word asynchronous for correctness, since the key discovery method does not return the result; it returns an object which will later receive the result in an event.)

I do understand the other specifications and the point you are making. But it is mainly a wider point not specific to my proposal. I'm happy to modify my proposal to align with whatever solution the group adopts to these wider problems. 
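To make that first point concrete, here is a minimal sketch of a discovery method whose result is concretely a Key object, delivered later via an event. getKeyByName, the operation object, and the event shape are all illustrative assumptions, not proposal text.

```javascript
// Sketch only: discovery over a set of (name, key) pairs, with a
// concrete result type. Nothing here is proposed spec text.
function createKeyStore(entries) {
  const keys = new Map(entries);  // the (name, key) pairs
  return {
    getKeyByName(name) {
      // The method does not return the Key directly; it returns an
      // operation object that later receives the result in an event.
      const op = { onresult: null };
      setTimeout(() => {
        const key = keys.get(name) || null;  // concretely a Key (or null)
        if (op.onresult) op.onresult({ target: { result: key } });
      }, 0);
      return op;
    }
  };
}

// Usage: the caller knows the result, when it arrives, is a Key.
const store = createKeyStore([['device-key', { type: 'secret', algorithm: 'AES-CBC' }]]);
const op = store.getKeyByName('device-key');
op.onresult = (e) => console.log(e.target.result.algorithm);  // "AES-CBC"
```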

> Consider [DEBUG-SHADERS] as an example (or a number of other related
> extensions in [WEBGL-REGISTRY], or even the [CANVAS] -> [WEBGL]
> context relationship).
> In particular, the use of the interface object allows you to group
> concepts (such as 'pre-provisioned keys') and expose multiple methods
> for diverse ways of obtaining pre-provisioned keys. Whether these are
> synchronous or asynchronous is up to whatever the needs of your use
> case and implementations are. How, and by what, your users query is
> again up to you. However, it provides clear, discoverable,
> enumerable means to decide what is available, rather than playing
> "stab in the dark".
> We're talking about UA support for "method X". There's no craziness
> needed. Either the UA supports querying by method X or it doesn't. The
> actual query USING method X may need to be asynchronous, may need
> extra parameters, etc - but that's for whatever interface implements
> "method X".

Ok, I'll look deeper at those specifications and see if a similar approach would work for this problem.
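For reference, here is my current reading of the WebGL "double discoverability" pattern, sketched with a stand-in object in place of a real rendering context. The registry contents are illustrative; only the getSupportedExtensions/getExtension shapes mirror the real API.

```javascript
// Sketch of WebGL-style double discoverability, using a stand-in
// object rather than a real context obtained via canvas.getContext.
const gl = {
  _registry: {
    // Each extension is a concrete interface with concrete submethods.
    WEBGL_debug_shaders: {
      getTranslatedShaderSource: (shader) => '/* translated */ ' + shader
    }
  },
  // First level of discovery: enumerate what this context supports.
  getSupportedExtensions() { return Object.keys(this._registry); },
  // Second level: obtain a typed interface object for one extension.
  getExtension(name) { return this._registry[name] || null; }
};

if (gl.getSupportedExtensions().includes('WEBGL_debug_shaders')) {
  const dbg = gl.getExtension('WEBGL_debug_shaders');
  console.log(dbg.getTranslatedShaderSource('void main() {}'));
}

// An unsupported extension is discoverable too, with a defined error mode:
console.log(gl.getExtension('NOT_A_REAL_EXTENSION'));  // null
```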

>>> * APIs which are specific to a particular device class or category
>>> have traditionally been incorporated within secondary specifications and
>>> semi-independent timelines.
>>> My broader objections regarding "generic" key discovery are this.
>>> There are many needs for key discovery beyond the pre-provisioned ID
>>> case (for example, certificate based discovery). If we're going to
>>> propose a generic interface, then we MUST follow the "rule of three"
>>> of software design - discussing *at least* three discovery methods to
>>> make sure our API is actually going to be suitable for all of them and
>>> behave in an expected way - for the web platform and for developers.
>> Are we applying this rule consistently ? Clearly it's desirable, but at some point we are contribution driven.
> Within algorithms? This was core to proposing what became the WD - and
> a strong motivator for the diversity of algorithms, including ones
> that were not originally noted in discussions (eg: PBKDF2).
> As far as "contribution driven", it's reasonable to expect concerns
> and objections to be raised without requiring a counter-proposal.
> While contributions for proposing new features do require someone to
> drive them, I don't think it's the burden of any member who disagrees
> with a proposal to be obligated to provide something better. Instead,
> they should provide the reasoning for why it's a concern - and that
> has been done here and repeatedly previously.

What I am saying is that if there are only two discovery methods for which there are proponents willing to do the implementation work, this should not prevent an API for those going into the specification.

Likewise, if there were only two algorithms of some particular class, and they had proponents and implementors, they should not be held up because there is not a third.

>>> This is why I speced out so many algorithms - to try and enumerate the
>>> many different types and requirements (for example, GCM's
>>> authentication tag, PBKDF2's password input, DH's key agreement, HMAC
>>> verification), so that we can actually compare and contrast the API
>>> and its semantics. And these were "simple" exactly because it's
>>> universally agreed upon what the "RSA" algorithm means - which is not
>>> true at all for key discovery.
>> At least for the case we proposed (origin-specific named keys), its as simple as access to a set of ( name, key ) pairs. I described it in detail in my proposal, but I could add more detail.
> I'm aware that for your case, it's simple. And for the sample list
> just off the top of my head from discussions in this group, it's clear
> that many more use cases exist. You have proposed the addition of an
> intended-to-be-generic API, with only one implementation - the
> 'trivial' case. I certainly cannot comment on whether the trivial case
> is enough to meet your needs - you certainly feel so, which is great -
> but when coupled with a generic API, it's absolutely reasonable to
> question "How generic is it" and "Will this work for X". If the answer
> to that question relies on this WG engaging in work so far pushed to
> secondary features (cert discovery and multiple key stores, notably)
> or potentially out of scope (kerberos?), then it's absolutely
> reasonable to push back on that, which I have been and continue to do.
> Hopefully, combined with all of the aforementioned specs, the path
> that has been clearly blazed by other WGs dealing with these same
> problems is clear. We don't need a "void* dovoid(void*)" API to
> accomplish that - you have two major themes available.
>> Anyway, as I said on the call, I'd be happy to move KeyStorage to a new specification, with the intent that it be worked on (hopefully replaced with something substantially improved) there by those with interest, provided that was on the basis that people interested in that could make progress without encountering objections to the work per se. I think to execute that move we need to create a draft first, so we can see and discuss the exact scope of the new specification before moving material from the main one.
>> ...Mark
> As I've said numerous times previously, consensus to remove it was
> previously met, recorded,

No. This is not the case. You are and were well aware of my objections at the time. I made it clear to the group in Lyon. I discussed it with you personally and I clarified the situation with the chair. Removal of pre-provisioned keys from the main specification (effectively what you are proposing) was not discussed at the meeting. This is absolutely no way to conduct a standardization process and will be a serious problem going forward if this capability is unilaterally removed.

> and it has been removed.

Where ? In the "Latest Editors Draft" it is here: http://www.w3.org/2012/webcrypto/WebCryptoAPI/#KeyStorage-interface.

Is there some secret draft the group does not know about (other than your private copy)?

> There is an open
> ISSUE for it. Just like functionality that was not addressed with
> other open ISSUEs, we continue to work through them and search for
> consensus. There has not been consensus yet.

There was consensus to publish the FPWD and it included this capability. There's been no agreement to remove it.

> We thus continue.

Well, I would very much like to do that. To talk about the technical issues and make progress on the specification, rather than having to argue about whether a capability that was previously agreed should even be worked on at all.

> I have added the boiler plate of "THIS IS A DRAFT AND SUBJECT TO
> CHANGE AT ANY TIME", which has been long understood by the very virtue of
> being a "Working Draft", to hopefully avoid any such confusion. Not until we
> begin approaching Last Call - and really, not until we go through CR
> to PR - are there any stability requirements for the API. It may be
> that we get to CR, start implementing, realize it's unimplementable,
> and kick it back to WD to be **completely rewritten**. That's the
> nature of the W3C Process, and is all spelled out in the process
> documentation.

I'm aware of the process. And it involves making changes by consensus. If Google's position on material in the FPWD has changed since then, you need to get agreement. Getting agreement to something else (enabling storage of keys in IndexedDB) doesn't mean you can just remove a different major feature (discovery of pre-provisioned keys) that you don't like.

> As far as objections to pursuing as a separate spec, there is no
> objection to the process and publication of such work. Concerns about
> the actual contents will be deferred if and until there is such a
> spec, but a number of concerns, alternatives, and solutions are
> contained in this thread (and threads prior) to hopefully inform any
> such effort.
> I look forward to seeing what is proposed.

I think it's also incumbent on you to make a proposal to address the many problems you're raising with the rest of the specification. Or are these only problems when applied to the pre-provisioned key feature ?

Received on Wednesday, 28 November 2012 04:43:16 UTC
