Re: Some Feature requests.

On Tue, Aug 6, 2019, at 10:35 PM, Myles C. Maxfield wrote:
> We’ve heard this defeatist argument before and entirely disagree with it.

My point is that security designs by non-experts have a poor track record.
Fingerprinting protection for WebGPU should be designed by an expert in browser fingerprinting, and I also think you are trying to fix the problem at the wrong level of abstraction: the protection should be provided at a level above the WebGPU API.

> > I wouldn't be surprised if you could fingerprint a GPU simply by testing edge conditions in the output of transcendental functions in a shader, or by using some other technique that it is impossible for this group to protect against.
> 
> Luckily, the shader compiler is inside the browser, so if 
> transcendental functions are the problem, we can fix them.

I think this means that WebGPU will guarantee identical results for floating point operations across all platforms. I don't understand how that would work. Basic arithmetic operations like + and * are not associative, so the actual result depends on the order in which numbers are added or multiplied. Shader compilers aggressively rearrange arithmetic expressions for speed, and use various strategies to compute subexpressions in parallel, depending on the underlying hardware. I don't see how you can control the order of evaluation. Hypothetically, you could generate pessimized code that forces arithmetic operations to be serialized, but the performance cost would be severe.
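To make the associativity point concrete, here is a tiny CPU-side C program (just an illustration, not shader code): with floats, the grouping the compiler happens to choose changes the answer.

    /* Floating-point addition is not associative: the grouping changes the result. */
    #include <stdio.h>

    int main(void) {
        float a = 1e20f, b = -1e20f, c = 1.0f;
        printf("(a + b) + c = %g\n", (a + b) + c);  /* 1: the huge terms cancel first */
        printf("a + (b + c) = %g\n", a + (b + c));  /* 0: c is absorbed into -1e20 */
        return 0;
    }

A shader compiler that reassociates a long sum, or splits it across parallel lanes, is making exactly this kind of grouping choice on your behalf.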

It sounds like WebGPU will need to guarantee full IEEE floating point semantics on all platforms. Vulkan, at least, does not require full IEEE semantics (e.g., it does not require Inf and NaN to be supported), so that could be challenging to implement. It also sounds like WebGPU will have to specify implementations for all math functions, overriding the implementations provided by the GPU driver. Is that the intent?
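For concreteness, this is the kind of IEEE behaviour at stake, written as CPU-side C (again just an illustration): on a driver that does not preserve Inf and NaN, the shader equivalents of these checks are allowed to come out differently.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        volatile float zero = 0.0f;       /* volatile, to keep the compiler from folding the divisions */
        float inf = 1.0f / zero;          /* +Inf under IEEE 754 */
        float nan = zero / zero;          /* NaN under IEEE 754 */
        printf("isinf: %d\n", isinf(inf));
        printf("isnan: %d\n", isnan(nan));
        printf("NaN != NaN: %d\n", nan != nan);  /* NaN never compares equal to itself */
        return 0;
    }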

With OpenGL, I am not seeing consistent behaviour in the shading language across GPUs. For example, the 'max' function behaves differently for -inf inputs on my nVidia GTX 1050 than on the other platforms I've tested. I expect 'sin' to behave differently on every platform, because fast, table-driven approximations are used and every implementation is different. It was surprising to see 'max' misbehaving; you'd think it would be easy to get right. But I guess WebGPU will fix this by providing its own implementation of 'max'?
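The C library's fmaxf is a handy reference point here (it follows IEEE 754's maxNum): these are edge inputs where the answer is well defined on paper, yet where shading-language max() implementations diverge in practice, and that divergence is exactly what a fingerprinting shader can read back.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        float ninf = -INFINITY;
        float nan  = nanf("");
        printf("fmaxf(-inf, 0)    = %g\n", fmaxf(ninf, 0.0f)); /* 0 */
        printf("fmaxf(-inf, -inf) = %g\n", fmaxf(ninf, ninf)); /* -inf */
        printf("fmaxf(nan, 1)     = %g\n", fmaxf(nan, 1.0f));  /* 1: IEEE maxNum ignores the NaN */
        return 0;
    }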

Received on Wednesday, 7 August 2019 04:07:18 UTC