Re: Some Feature requests.

> On 8 Aug 2019, at 06:49, Kai Ninomiya <kainino@google.com> wrote:
> 
> Just want to jump in and say that, in our experience, for GPU compute even more so than graphics, the optimal code can be wildly different on different GPU chipsets/architectures, and it's even more important that apps reach peak performance. We should expect that frameworks (like TensorFlow.js) will need to know which of several architecture-specific hand-tuned implementations of each performance-critical operation (like matmul or conv2d) to choose from for best performance.

> This doesn't necessarily preclude us from hiding this information behind some user action (like a permission prompt, or PWA installed app), but sites that can't get that permission may resort to brute force tests as mentioned by others.
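(For concreteness, a minimal sketch of that brute-force fallback as a framework might implement it, in TypeScript. KernelVariant, candidates and runOnce are hypothetical placeholders for architecture-specific matmul/conv2d dispatches; nothing here is a proposed API.)

// Time each hand-tuned kernel variant once and cache the winner, rather
// than querying the architecture directly. Purely illustrative.
type KernelVariant = {
  name: string;
  runOnce: () => Promise<void>; // dispatches the variant and awaits completion
};

async function pickFastestVariant(candidates: KernelVariant[]): Promise<KernelVariant> {
  let best = candidates[0];
  let bestTime = Infinity;
  for (const variant of candidates) {
    await variant.runOnce(); // warm-up: compile pipelines, fill caches
    const start = performance.now();
    for (let i = 0; i < 10; i++) {
      await variant.runOnce(); // a few timed iterations
    }
    const elapsed = (performance.now() - start) / 10;
    if (elapsed < bestTime) {
      bestTime = elapsed;
      best = variant;
    }
  }
  return best; // a framework would presumably cache this so the probe runs once
}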

It's sad to consider, but it could also turn out that writing GPU compute code that performs acceptably on all hardware requires exposing so much personally identifiable information, either directly or indirectly, that the privacy concerns outweigh the benefit of exposing the API in the first place.

And it isn't only privacy that can trigger such a decision. We've excluded features because they won't perform well on some hardware, even where that hardware can be identified.

The extreme end of the permission-prompt approach is allowing the Web site to run whatever native code it wants. I realise that users have a broad range of assumptions about privacy on the Web, so it is hard to know what to design for. The ideal of universally acceptable features, performance, security and privacy, all in balance, is something of a mythical place.

Dean

Received on Wednesday, 7 August 2019 21:09:48 UTC