W3C home > Mailing lists > Public > public-gpu@w3.org > November 2017

Re: Ill-defined programs

From: Maciej Stachowiak <mjs@apple.com>
Date: Mon, 13 Nov 2017 15:22:06 -0800
Message-id: <B45FE734-03AF-434F-8435-897BC2E6B4BF@apple.com>
Cc: Kai Ninomiya <kainino@google.com>, public-gpu <public-gpu@w3.org>
To: Dzmitry Malyshau <dmalyshau@mozilla.com>


> On Nov 13, 2017, at 2:13 PM, Dzmitry Malyshau <dmalyshau@mozilla.com> wrote:
> 
> Maciej,
> 
> Thanks for expressing your position!
> 
> Out of curiosity, does the WebGL Working Group take the same stance on ill-defined programs (i.e. applying interoperability requirements to them)?

I'm sorry, I don't know. I have not participated in or followed the WebGL Working Group.

> 
> > Yes. Strong objection. If the behavior of any program is not fully specified, then web developers will start to accidentally depend on the behavior of one browser (usually whichever is most popular), and then browsers will have to reverse-engineer each other's behavior.
> 
> This sentence touches 2 important topics:
>   1. Full specification. I don't believe we can fully specify the behavior of even well-defined programs... Say, you have a number of UAV/SSBO writes in the shaders - there is no efficient way for us to guarantee a particular ordering (hence the Unordered part), and thus any developer relying on the order produced by their dev machine may be surprised to see what other users get, depending on the backend, OS, driver, etc. If we agree here, we need to figure out where to draw the line between strong portable behavior and whatnot. My point is, excluding the ill-defined programs from the equation provides more opportunity for well-defined programs to run efficiently.

It's fine by me to specify the behavior for ill-defined programs that enables max efficiency, even if it is a behavior that seems weird in isolation. But I believe there must be a single specified behavior (other than, perhaps, resource limits or dependencies on optional extensions).
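To make the unordered-write hazard concrete, here is a minimal sketch (Python standing in for a compute shader; the buffer, invocations, and scheduling model are all hypothetical) of why the final contents of a storage buffer can legitimately differ between runs when two invocations write the same slot:

```python
import random

def run_dispatch(invocations, buffer):
    """Simulate a GPU dispatch: invocation order is unspecified,
    so we shuffle to model one possible hardware schedule."""
    order = list(invocations)
    random.shuffle(order)
    for index, value in order:
        buffer[index] = value
    return buffer

# Two invocations both write to slot 0 of an SSBO-like buffer.
invocations = [(0, 1), (0, 2)]
result = run_dispatch(invocations, [0] * 4)

# The only guarantee is that the result is *some* serialization of
# the writes -- relying on a particular one is non-portable.
assert result[0] in (1, 2)
```

A spec can still bound this nondeterminism (the result is one of the possible serializations) without mandating a particular schedule, which is narrower than leaving the behavior entirely undefined.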

WebAssembly managed to do this, so I believe it is possible. WebAssembly even has a pretty clear definition of what they consider to be interoperable, and I'd suggest we follow it to the extent it makes sense.
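As an illustration of the WebAssembly approach, here is a hedged sketch (Python standing in for spec pseudocode; the `Trap` class is an invention for illustration): even the "error" cases of signed integer division have exactly one mandated outcome, a deterministic trap, rather than implementation-defined behavior:

```python
class Trap(Exception):
    """Models a WebAssembly trap: a specified, deterministic failure."""

def i32_div_s(a: int, b: int) -> int:
    """Signed 32-bit division with fully specified edge cases,
    in the spirit of what the WebAssembly spec mandates."""
    INT32_MIN = -2**31
    if b == 0:
        raise Trap("integer divide by zero")   # always traps, everywhere
    if a == INT32_MIN and b == -1:
        raise Trap("integer overflow")         # always traps, everywhere
    # div_s truncates toward zero (not floor division).
    q = abs(a) // abs(b)
    return -q if (a < 0) != (b < 0) else q

assert i32_div_s(7, -2) == -3  # truncation toward zero, not -4
```

Every conforming engine agrees on both the successful results and the failure behavior, which is what makes content portable even when it hits an edge case.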


>   2. Reverse-engineering each other's behavior. AFAIK, this group's goal is to produce an open-source WebGPU implementation that can be shared by browser vendors, so there is no need to reverse-engineer. I would prefer us (as a group) to focus on the standard more than the implementation, though, given that we have an alternative.

If the implementation has a single behavior and is used by everyone, then we may as well spec it. If even a single implementation has unpredictable behavior, that seems like a serious design problem.

> 
> >>  2. Are there any objections to having an optional debug validation layer?
> 
> > I don't know what that means, so no opinion.
> 
> We have touched on the validation layer topic a few times, incidentally. For example, when discussing robust buffer access, we mentioned that there could be a validation layer doing all the clamping and potentially signalling errors up (to the CPU side), but we (Mozilla) don't believe it to be necessary at all times, given that both Vulkan and D3D12 provide decent robustness out of the box. The validation layer could do much more, e.g. ensuring pipeline barriers are correct, but the exact responsibilities depend on the chosen API direction.

If websites can turn this layer on or off at their option, then it doesn't solve the interop problem. If Vulkan and D3D12 provide robustness with reliable and consistent behavior in the error case, then it seems adequate to specify that behavior. If the same program could do totally different things on Vulkan, D3D12 and Metal, or between different devices with the same API, then if WebGPU takes off we will end up in reverse engineering hell.
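One way to pin down a single behavior for an ill-defined access is sketched below (Python as pseudocode; clamping is only one candidate policy — Vulkan's robustBufferAccess, for instance, also permits returning zero for out-of-bounds reads, so the exact rule would be a spec decision):

```python
def robust_load(buffer, index):
    """Out-of-bounds reads are ill-defined at the raw API level; a spec
    could mandate clamping the index into range so every backend
    computes the same result (one possible policy, not *the* policy)."""
    clamped = min(max(index, 0), len(buffer) - 1)
    return buffer[clamped]

data = [10, 20, 30, 40]
assert robust_load(data, 2) == 30    # in-bounds: unchanged
assert robust_load(data, 99) == 40   # OOB: clamped to last element
assert robust_load(data, -5) == 10   # OOB: clamped to first element
```

Whatever policy is chosen, the interoperability point is that it is a single policy: the same ill-defined program observes the same result on every backend and device.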

> 
> 
> > Security (for some appropriate definition) is a hard requirement.
> 
> Absolutely! What about ordering of the rest? (portability, performance, API simplicity)

I don't think I can define a strict ordering among the others. They are all important, but tradeoffs among them might need to be a judgment call.

> 
> 
> Kai,
> 
> >  When we say, "There is no duty to detect or report whether an application is actually ill-behaved," we're only saying that it's not required for WebGPU to be secure.
> 
> Yes, I understand. I took the discussion outside of the bounds/context of the original issue, hence putting it on the mailing list (as opposed to discussing in place).
> 
> Thank you,
> Dzmitry
> 
> 
> On Mon, Nov 13, 2017 at 4:42 PM, Kai Ninomiya <kainino@google.com <mailto:kainino@google.com>> wrote:
> Dzmitry, I want to just clarify David's intent, if I understand it correctly. His document is focused solely on the security constraints. When we say, "There is no duty to detect or report whether an application is actually ill-behaved," we're only saying that it's not required for WebGPU to be secure.
> 
> There is, of course, still a difference of opinion on whether we should ultimately actually detect/normalize ill behavior.
> 
> On Mon, Nov 13, 2017 at 12:39 PM Maciej Stachowiak <mjs@apple.com <mailto:mjs@apple.com>> wrote:
> 
>> On Nov 13, 2017, at 12:27 PM, Maciej Stachowiak <mjs@apple.com <mailto:mjs@apple.com>> wrote:
>> 
>> 
>> 
>>> On Nov 13, 2017, at 11:44 AM, Dzmitry Malyshau <dmalyshau@mozilla.com <mailto:dmalyshau@mozilla.com>> wrote:
>>> 
>>> I'd like to follow-up on discussion points mentioned in https://github.com/gpuweb/gpuweb/issues/39 <https://github.com/gpuweb/gpuweb/issues/39>:
>>> 
>>> > There is no duty to detect or report whether an application is actually ill-behaved. Attempting to do so may incur overhead. For a development environment it may be desirable to detect bad behaviour, but that is not a security requirement.
>>> > We don't care about the computed results or performance of an ill-behaved application. That may be a user-experience concern, but not a security concern.
>>> 
>>> This is wider than just shaders or just security, so I'm following up on email as opposed to hijacking the GitHub thread.
>> 
>> I agree that consistent behavior for ill-behaved applications is nit a security requirement. But it is a 
> 
> Meant to say: but it is an interoperability requirement.
> 
>> 
>>> 
>>> The approach expressed by David matches the direction we (as in - Mozilla) would like to see WebGPU going: allow the user to fully (*explicitly*) specify what needs to be done, promise the portability and best performance for well-defined programs, and secure the ill-defined programs without going extra mile for making them portable or fast.
>>> 
>>> It runs somewhat counter to Apple's idea that, since the browser has to validate all the API calls (or, in this case, shader code/execution) for security, it can make detailed decisions about how the work is submitted to the backend API (e.g. insert pipeline barriers, allocate resource memory, clamp array indices, etc.), thus allowing the exposed API to have more implicit parts.
>>> 
>>> Trying to figure out the steps to make an objective (non-opinionated) call to this issue, we can start with these questions:
>>>   1. Are there any objections to lowering portability/performance guarantees for ill-defined programs?
>> 
>> Yes. Strong objection. If the behavior of any program is not fully specified, then web developers will start to accidentally depend on the behavior of one browser (usually whichever is most popular), and then browsers will have to reverse-engineer each other's behavior. This has happened so many times in the course of web standards development that it's almost a running joke. Every once in a while someone says "hey, let's just not define error handling, we only need to define the behavior for valid content", and it happens again. The first time was HTML: browsers ended up reverse-engineering each other's error handling until they finally got sick of the W3C not defining it and formed the WHATWG to create HTML5, which fully specified parsing behavior for all invalid documents. CSS, JavaScript and WebAssembly also have fully interoperable behavior by spec, even in "invalid" or "error" or "ill-defined" cases.
>> 
>> Let's not make this rookie mistake. We must fully define the behavior of all programs.
>> 
>> 
>>>   2. Are there any objections to having an optional debug validation layer?
>> 
>> I don't know what that means, so no opinion.
>> 
>>>   3. Can we agree that API simplicity is lower priority than security (starting with obvious), portability, and performance?
>> 
>> Security (for some appropriate definition) is a hard requirement.
>> 
>>> 
>>> Thanks,
>>> Dzmitry
> 


Received on Monday, 13 November 2017 23:22:32 UTC
