Re: Pipeline objects open questions

Hi Corentin,

> How do we take advantage of the pipeline caching present in D3D12 and
Vulkan? Do we expose it to the application or is it done magically in the
WebGPU implementation?

I think we should expose the "pipeline derivative" logic, which translates
nicely to the D3D12 cached PSO blob, but not `VkPipelineCache` (which we can
handle internally, at least for the MVP/now).
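
For illustration, here is a rough (untested) sketch of what the D3D12 backend
could do internally for a derived pipeline; the helper name and parameters
are made up for this example:

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Hypothetical backend helper: compile a "derived" PSO from the cached
    // blob of an already-compiled "parent" PSO, letting the driver skip
    // most of the compilation work.
    ComPtr<ID3D12PipelineState> CreateDerivedPso(
        ID3D12Device* device,
        ID3D12PipelineState* parentPso,
        D3D12_GRAPHICS_PIPELINE_STATE_DESC desc)  // filled, except CachedPSO
    {
        ComPtr<ID3DBlob> cachedBlob;
        if (FAILED(parentPso->GetCachedBlob(&cachedBlob))) {
            return nullptr;
        }
        desc.CachedPSO.pCachedBlob = cachedBlob->GetBufferPointer();
        desc.CachedPSO.CachedBlobSizeInBytes = cachedBlob->GetBufferSize();

        ComPtr<ID3D12PipelineState> derivedPso;
        if (FAILED(device->CreateGraphicsPipelineState(
                &desc, IID_PPV_ARGS(&derivedPso)))) {
            // e.g. D3D12_ERROR_DRIVER_VERSION_MISMATCH: fall back to a
            // full compile without the cached blob.
            return nullptr;
        }
        return derivedPso;
    }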

> Should the type of the indices be set in RenderPipelineDescriptor? If
not, how is the D3D12 IBStripCutValue chosen?

I made a test (https://github.com/gfx-rs/gfx/pull/1486) and found that
D3D12 treats the strip cut value 0xFFFFFFFF as 0xFFFF for u16 index buffers.
Thus, I propose not requiring the user to specify the index type, and always
using the 0xFFFFFFFF strip cut value in the D3D12 backend.
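
Concretely, the D3D12 backend would then just hard-code the strip cut value
when filling the PSO descriptor, regardless of which index format gets bound
later (sketch, assuming <d3d12.h> is included):

    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    // ... shaders, blend/depth/rasterizer state, etc. ...
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    // One value for both u16 and u32 index buffers: for 16-bit indices
    // D3D12 effectively clamps 0xFFFFFFFF down to 0xFFFF.
    desc.IBStripCutValue = D3D12_INDEX_BUFFER_STRIP_CUT_VALUE_0xFFFFFFFF;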

> Should the vertex attributes somehow be included in the PipelineLayout so
vertex buffers are treated as other resources and changed in bulk with them?

I don't think we should try to innovate here; we should just provide what
D3D12/Vulkan/Metal have (and none of them put vertex attributes in the
layout/signature).
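
For reference, this is roughly how the split already looks in Vulkan (D3D12
and Metal are analogous): vertex attributes go into the pipeline itself,
while the layout only describes descriptor sets and push constants. The
`descriptorSetLayout` and `pipelineLayout` variables are assumed to exist:

    // Vertex attributes are part of the pipeline description...
    VkVertexInputAttributeDescription attr = {};
    attr.location = 0;
    attr.binding  = 0;
    attr.format   = VK_FORMAT_R32G32B32_SFLOAT;
    attr.offset   = 0;

    VkPipelineVertexInputStateCreateInfo vertexInput = {};
    vertexInput.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO;
    vertexInput.vertexAttributeDescriptionCount = 1;
    vertexInput.pVertexAttributeDescriptions = &attr;

    // ...while the pipeline layout only knows about descriptor set layouts
    // and push constants, nothing about vertex buffers.
    VkPipelineLayoutCreateInfo layoutInfo = {};
    layoutInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
    layoutInfo.setLayoutCount = 1;
    layoutInfo.pSetLayouts = &descriptorSetLayout;

    VkGraphicsPipelineCreateInfo pipelineInfo = {};
    pipelineInfo.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
    pipelineInfo.pVertexInputState = &vertexInput;
    pipelineInfo.layout = pipelineLayout;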

> Does the sample count of the pipeline state come from the RenderPass too?

There are multiple kinds of sample counts. An image has a sample count,
which defines its actual storage properties; the framebuffer (containing
images) then implicitly carries that property. The rasterizer also has a
sample count, which is fixed-function state, and like all the other
fixed-function state it belongs in the pipeline descriptor/state. So my
answer would be "no".
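
In both backends the rasterizer sample count already lives in the pipeline
description rather than in the render pass, so this maps directly (sketch):

    // Vulkan: part of the pipeline's multisample state.
    VkPipelineMultisampleStateCreateInfo msState = {};
    msState.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO;
    msState.rasterizationSamples = VK_SAMPLE_COUNT_4_BIT;

    // D3D12: part of the PSO descriptor; must match the render target images.
    D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
    psoDesc.SampleDesc.Count = 4;
    psoDesc.SampleDesc.Quality = 0;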

> Should enablement of independent attachment blend state be explicit like
in D3D12 or implicit?

I don't think it's worth deriving this property from the blend functions. We
should just follow Vulkan/D3D12 here and be explicit. Given the Web API, we
may have a nullable blend descriptor, where `null` would mean no blending.
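
As a sketch of how a D3D12 backend could translate that (the `webgpuBlend`
array, `attachmentCount`, and the `ToD3D12Blend*` helpers are hypothetical):

    D3D12_BLEND_DESC blendDesc = {};
    blendDesc.IndependentBlendEnable = TRUE;
    for (UINT i = 0; i < attachmentCount; ++i) {
        D3D12_RENDER_TARGET_BLEND_DESC& rt = blendDesc.RenderTarget[i];
        rt.RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
        if (webgpuBlend[i] == nullptr) {
            rt.BlendEnable = FALSE;  // null descriptor => no blending
        } else {
            rt.BlendEnable = TRUE;
            rt.SrcBlend  = ToD3D12Blend(webgpuBlend[i]->srcFactor);
            rt.DestBlend = ToD3D12Blend(webgpuBlend[i]->dstFactor);
            rt.BlendOp   = ToD3D12BlendOp(webgpuBlend[i]->operation);
        }
    }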

> Should alpha to coverage be part of the multisample state or the blend
state?

Multisample would be more appropriate, I believe, since alpha-to-coverage
makes sense even without blending.
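
For what it's worth, Vulkan already keeps it in the multisample state, while
D3D12 happens to keep it in the blend desc; either way the backend just
copies a flag (sketch):

    // Vulkan
    VkPipelineMultisampleStateCreateInfo msState = {};
    msState.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO;
    msState.alphaToCoverageEnable = VK_TRUE;

    // D3D12
    D3D12_BLEND_DESC blendDesc = {};
    blendDesc.AlphaToCoverageEnable = TRUE;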

> Should “depth test enable” be implicit or explicit?

My answer would be the same as for blending: explicit.
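
Both backends already take an explicit flag here, so an explicit field in
the WebGPU descriptor maps trivially (sketch):

    // Vulkan
    VkPipelineDepthStencilStateCreateInfo dsState = {};
    dsState.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO;
    dsState.depthTestEnable = VK_TRUE;
    dsState.depthCompareOp  = VK_COMPARE_OP_LESS;

    // D3D12
    D3D12_DEPTH_STENCIL_DESC dsDesc = {};
    dsDesc.DepthEnable = TRUE;
    dsDesc.DepthFunc   = D3D12_COMPARISON_FUNC_LESS;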

Thank you! Hopefully, this makes sense :)
-Dzmitry


On Fri, Sep 8, 2017 at 3:24 PM, Corentin Wallez <cwallez@google.com> wrote:

> Hey all,
>
> While what goes into pipeline objects is mostly clear (see this doc
> <https://github.com/gpuweb/gpuweb/blob/master/design/Pipelines.md>),
> there is still a bunch of open questions:
>
>    - How do we take advantage of the pipeline caching present in D3D12
>    and Vulkan? Do we expose it to the application or is it done magically in
>    the WebGPU implementation?
>    - Should the type of the indices be set in RenderPipelineDescriptor?
>    If not, how is the D3D12 IBStripCutValue chosen?
>    - Should the vertex attributes somehow be included in the
>    PipelineLayout so vertex buffers are treated as other resources and changed
>    in bulk with them?
>    - Does the sample count of the pipeline state come from the RenderPass
>    too?
>    - Should enablement of independent attachment blend state be explicit
>    like in D3D12 or implicit? Should alpha to coverage be part of the
>    multisample state or the blend state?
>    - About Vulkan’s VkPipelineDepthStencilStateCreateInfo::depthBoundsTestEnable
>    and D3D12's D3D12_DEPTH_STENCIL_DESC1::DepthBoundsTestEnable? Should
>    “depth test enable” be implicit or explicit?
>
> What do you all think about these?
>
> Corentin
>

Received on Monday, 11 September 2017 15:19:43 UTC