Re: Ill-defined programs

> On Nov 13, 2017, at 11:44 AM, Dzmitry Malyshau <dmalyshau@mozilla.com> wrote:
> 
> I'd like to follow-up on discussion points mentioned in https://github.com/gpuweb/gpuweb/issues/39:
> 
> > There is no duty to detect or report whether an application is actually ill-behaved. Attempting to do so may incur overhead. For a development environment it may be desirable to detect bad behaviour, but that is not a security requirement.
> > We don't care about the computed results or performance of an ill-behaved application. That may be a user-experience concern, but not a security concern.
> 
> This is wider than just shaders or just security, so I'm following up on email as opposed to hijacking the GitHub thread.

I agree that consistent behavior for ill-behaved applications is not a security requirement. But it is a portability requirement.

> 
> The approach expressed by David matches the direction we (as in Mozilla) would like to see WebGPU go: allow the user to fully (*explicitly*) specify what needs to be done, promise portability and the best performance for well-defined programs, and secure the ill-defined programs without going the extra mile to make them portable or fast.
> 
> It goes somewhat against Apple's idea that, since the browser has to validate all the API calls (or, in this case, shader code/execution) for security, it can make detailed decisions about how the work is submitted to the backend API (e.g. insert pipeline barriers, allocate resource memory, clamp array indices, etc.), thus giving the exposed API more implicit parts.
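
(A toy sketch, to make "implicit" concrete. Nothing below is real WebGPU or backend API; the types and names are invented for illustration. The point is only that in this style the implementation, not the application, decides where synchronization goes.)

    #include <stdio.h>

    /* Hypothetical model of implicit barrier insertion: the implementation
     * remembers how each resource was last used and inserts a transition
     * barrier itself when the usage changes. */

    typedef enum { USAGE_NONE, USAGE_RENDER_TARGET, USAGE_SAMPLED } Usage;

    typedef struct {
        const char *name;
        Usage last_usage;
    } Resource;

    static void use_resource(Resource *r, Usage next) {
        if (r->last_usage != USAGE_NONE && r->last_usage != next) {
            /* The implementation, not the application, decided that a
             * barrier is needed at this point in the command stream. */
            printf("barrier: %s transitions from %d to %d\n",
                   r->name, (int)r->last_usage, (int)next);
        }
        r->last_usage = next;
    }

    int main(void) {
        Resource shadow_map = { "shadow_map", USAGE_NONE };
        use_resource(&shadow_map, USAGE_RENDER_TARGET); /* pass 1 renders into it       */
        use_resource(&shadow_map, USAGE_SAMPLED);       /* pass 2 samples it -> barrier */
        return 0;
    }

In the explicit style described above, the application itself would issue the equivalent of that barrier, and the implementation would only validate it.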
> 
> Trying to figure out the steps to make an objective (non-opinionated) call to this issue, we can start with these questions:
>   1. Are there any objections to lowering portability/performance guarantees for ill-defined programs?

Yes. Strong objection. If the behavior of any program is not fully specified, then web developers will start to accidentally depend on the behavior of one browser (usually whichever is most popular), and then browsers will have to reverse-engineer each other's behavior. This has happened so many times in the course of web standards development that it's almost a running joke. Every once in a while someone says "hey, let's just not define error handling, we only need to define the behavior for valid content", and it happens again. The first time was HTML: browsers ended up reverse-engineering each other's error handling until they finally got sick of the W3C not defining it and formed the WHATWG to create HTML5, which fully specifies parsing behavior for all invalid documents. CSS, JavaScript and WebAssembly also have fully interoperable behavior by spec, even in "invalid" or "error" or "ill-defined" cases.

Let's not make this rookie mistake. We must fully define the behavior of all programs.
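
To make this concrete, take the "clamp array indices" example from the quoted message. Here are two lowerings of the same out-of-bounds read that are both perfectly secure; this is a hypothetical sketch, not taken from any spec or browser, and the names are invented.

    #include <stdio.h>

    /* Two memory-safe but different lowerings of buf[i] where i may be
     * out of bounds.  Both are "secure"; they simply disagree on the
     * result the application observes. */

    static float lookup_clamping(const float *buf, unsigned len, unsigned i) {
        /* Implementation A: clamp the index into range. */
        return buf[i < len ? i : len - 1u];
    }

    static float lookup_zeroing(const float *buf, unsigned len, unsigned i) {
        /* Implementation B: return zero for any out-of-bounds index. */
        return i < len ? buf[i] : 0.0f;
    }

    int main(void) {
        float buf[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
        printf("A: %f\n", lookup_clamping(buf, 4, 7)); /* prints 4.000000 */
        printf("B: %f\n", lookup_zeroing(buf, 4, 7));  /* prints 0.000000 */
        return 0;
    }

An application tested only against implementation A will quietly come to depend on the clamping result; implementation B then looks "broken" even though both satisfied an under-specified spec. Picking one behavior and writing it down is what avoids that.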


>   2. Are there any objections to having an optional debug validation layer?

I don't know what that means, so no opinion.

>   3. Can we agree that API simplicity is lower priority than security (starting with obvious), portability, and performance?

Security (for some appropriate definition) is a hard requirement.

> 
> Thanks,
> Dzmitry
> 

Received on Monday, 13 November 2017 20:27:39 UTC