
Re: [PROJECT] Compiling Machine Learning to WebGPU

From: Felix Maier <xilefmai@gmail.com>
Date: Fri, 15 May 2020 20:42:41 +0200
Message-ID: <CAL8VZWx1Gj6fP1wnhYqBNhne0S-rP=t8Vo9Lb0RHGOr_o7BO4g@mail.gmail.com>
To: Tianqi Chen <tqchen@cs.washington.edu>
Cc: Corentin Wallez <cwallez@google.com>, public-gpu <public-gpu@w3.org>
As far as I know, only Vulkan offers Tensor Core acceleration for matrices
(via the VK_NV_cooperative_matrix extension), and it is limited to NVIDIA
hardware.
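For readers less familiar with the compute-shader and workgroup-memory capability TQ describes below, here is an illustrative sketch (mine, not code from TVM or the spec): a WGSL compute shader that stages data in shared workgroup memory before reducing it, plus a small host-side helper for sizing the dispatch. The buffer and function names are hypothetical, and the WGSL syntax shown is the current form, which postdates this thread.

```typescript
// WGSL: each 8x8 workgroup (64 invocations) stages a tile of the input
// in fast shared (workgroup) memory, synchronizes, then writes one
// partial sum per workgroup. This is the capability WebGPU adds over
// WebGL, which has no compute shaders or shared memory.
const shaderSrc = /* wgsl */ `
@group(0) @binding(0) var<storage, read> inBuf : array<f32>;
@group(0) @binding(1) var<storage, read_write> outBuf : array<f32>;

var<workgroup> tile : array<f32, 64>;

@compute @workgroup_size(8, 8)
fn main(@builtin(local_invocation_index) li : u32,
        @builtin(workgroup_id) wg : vec3<u32>) {
  tile[li] = inBuf[wg.x * 64u + li]; // stage one element per invocation
  workgroupBarrier();                // sync before cross-invocation reads
  if (li == 0u) {
    var sum = 0.0;
    for (var i = 0u; i < 64u; i = i + 1u) {
      sum = sum + tile[i];
    }
    outBuf[wg.x] = sum;              // one partial sum per workgroup
  }
}`;

// Host-side helper: number of workgroups needed to cover n elements
// with a given tile size (used when calling dispatchWorkgroups).
function workgroupCount(n: number, tileSize: number): number {
  return Math.ceil(n / tileSize);
}
```

On the host, the shader string would be passed to `device.createShaderModule({ code: shaderSrc })` and dispatched with `workgroupCount(n, 64)` workgroups along x.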


On Fri, 15 May 2020 at 17:59, Tianqi Chen <tqchen@cs.washington.edu> wrote:

> Thanks Corentin:
> Subgroups (warp-level semantics in CUDA) are useful for getting the last
> mile on certain GPUs, but they are not strictly necessary in many cases.
> The most crucial capability WebGPU enables (over WebGL) is the ability to
> run compute shaders and make effective use of shared (workgroup) memory.
> To take the compute to the next level (e.g. making use of Tensor Cores on
> NVIDIA GPUs when available), more extensions would be needed, and we would
> certainly be curious whether that falls within the scope of WebGPU.
> TQ
> On Fri, May 15, 2020 at 8:37 AM Corentin Wallez <cwallez@google.com>
> wrote:
>> Hey Tianqi,
>> Thanks for sharing! The blog post was very interesting and the results
>> encouraging. I'm surprised it's this close even when WebGPU doesn't support
>> float16 or subgroup operations yet. Impressive!
>> The samples list in Implementation-Status is very out of date; a lot of
>> people have started using WebGPU lately. Group, what do you
>> think of having a "WebGPU users" wiki page to collect things beyond the
>> most trivial samples? Apache TVM would fit in there.
>> Cheers,
>> Corentin
>> On Thu, May 14, 2020 at 11:46 PM Tianqi Chen <tqchen@cs.washington.edu>
>> wrote:
>>> Hi WebGPU community:
>>> I am sending this along since I think it could be interesting to the
>>> members who are also interested in machine learning.
>>> We recently introduced support for WASM and WebGPU to the Apache TVM
>>> deep learning compiler.
>>> Our preliminary experiments show that TVM’s WebGPU backend can *get
>>> close to native GPU performance* when deploying
>>> deep learning models to the web. Please see the blog post here:
>>> https://tvm.apache.org/2020/05/14/compiling-machine-learning-to-webassembly-and-webgpu
>>> I am also wondering if it is possible to link to the blog post as an
>>> example of ML on WebGPU in the wiki page (
>>> https://github.com/gpuweb/gpuweb/wiki/Implementation-Status).
>>> As an open source community, we certainly welcome feedback and
>>> collaboration.
>>> Cheers,
>>> TQ
Received on Friday, 15 May 2020 18:43:47 UTC
