- From: Ben Constable <bencon@microsoft.com>
- Date: Mon, 11 Sep 2017 22:25:10 +0000
- To: Dzmitry Malyshau <dmalyshau@mozilla.com>, Corentin Wallez <cwallez@google.com>
- CC: public-gpu <public-gpu@w3.org>
- Message-ID: <CY4PR21MB01523AF65A8624D9EEFFDAA6CA680@CY4PR21MB0152.namprd21.prod.outlook.com>
With regard to your test with the strip cut index, I think it is dangerous to rely on non-specced behavior like that. Do we want to allow multiple index types in the MVP (both 16- and 32-bit)? I realize that there are bandwidth limitations etc. driving people's decisions here, but I am skeptical that this design point is something that locks us in if we just choose 32-bit for the time being.

I agree that including the vertex attributes in the pipeline layout is something that I don't want to innovate on. Researching the potential side effects of this feels like it is not worth the time for the value we get. It is likely that the APIs will resist us doing this.

If I am understanding Dzmitry right, then I agree with him about sample counts. I agree about not deriving the blending property too. I also agree that depth test enable should be explicit.

From: Dzmitry Malyshau [mailto:dmalyshau@mozilla.com]
Sent: Monday, September 11, 2017 8:19 AM
To: Corentin Wallez <cwallez@google.com>
Cc: public-gpu <public-gpu@w3.org>
Subject: Re: Pipeline objects open questions

Hi Corentin,

> How do we take advantage of the pipeline caching present in D3D12 and Vulkan? Do we expose it to the application or is it done magically in the WebGPU implementation?

I think we should expose the "pipeline derivative" logic that translates nicely to a D3D12 cached PSO blob, but not `VkPipelineCache` (which we can handle internally, at least for the MVP).

> Should the type of the indices be set in RenderPipelineDescriptor? If not, how is the D3D12 IBStripCutValue chosen?

I made a test (https://github.com/gfx-rs/gfx/pull/1486) and found out that D3D12 treats the strip cut value 0xFFFFFFFF as 0xFFFF for u16 index buffers. Thus, I propose not requiring the user to specify the index type and always using the 0xFFFFFFFF strip cut value in the D3D12 backend.

> Should the vertex attributes somehow be included in the PipelineLayout so vertex buffers are treated as other resources and changed in bulk with them?

I don't think we should try to innovate here, as opposed to just providing what D3D12/Vulkan/Metal have (and they don't have vertex attributes in the layout/signature).

> Does the sample count of the pipeline state come from the RenderPass too?

There are multiple kinds of sample counts. There is the sample count of an image, defining the actual storage properties; the framebuffer (containing images) then implicitly carries that property (the sample count of the storage). There is also the sample count of the rasterizer, defining fixed-function state, and like all the other fixed-function state it should be in the pipeline descriptor/state. So my answer would be "no".

> Should enablement of independent attachment blend state be explicit like in D3D12 or implicit?

I don't think it's worth deriving this property from the blend function. We should just follow Vulkan/D3D12 here and be explicit. Given the Web API, we may have a nullable blend descriptor, where `null` would mean no blending.

> Should alpha to coverage be part of the multisample state or the blend state?

Multisample would be more appropriate, I believe, since coverage makes sense without blending.
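For reference, here is a minimal sketch (not from the thread itself) of where the shipping APIs put this toggle today, which is the tension behind the question: Vulkan keeps alpha to coverage in the multisample state, while D3D12 keeps it in the blend description. The struct and field names are the real ones; the surrounding function and sample-count choice are illustrative only, and the snippet assumes a Windows build with the Vulkan SDK available.

    #include <vulkan/vulkan.h>   // Vulkan SDK
    #include <d3d12.h>           // Windows SDK (D3D12)

    void SketchAlphaToCoverage() {
        // Vulkan: alpha-to-coverage is part of the multisample state.
        VkPipelineMultisampleStateCreateInfo msState = {};
        msState.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO;
        msState.rasterizationSamples = VK_SAMPLE_COUNT_4_BIT; // rasterizer sample count (fixed-function state)
        msState.alphaToCoverageEnable = VK_TRUE;

        // D3D12: the equivalent toggle sits in the blend description instead.
        D3D12_BLEND_DESC blendDesc = {};
        blendDesc.AlphaToCoverageEnable = TRUE;
        blendDesc.RenderTarget[0].BlendEnable = FALSE; // coverage works without blending enabled
    }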
> Should "depth test enable" be implicit or explicit?

My answer would be the same as for blending - explicit.

Thank you! Hopefully this makes sense :)

-Dzmitry

On Fri, Sep 8, 2017 at 3:24 PM, Corentin Wallez <cwallez@google.com> wrote:

Hey all,

While what goes into pipeline objects is mostly clear (see this doc: https://github.com/gpuweb/gpuweb/blob/master/design/Pipelines.md), there is still a bunch of open questions:

* How do we take advantage of the pipeline caching present in D3D12 and Vulkan? Do we expose it to the application or is it done magically in the WebGPU implementation?
* Should the type of the indices be set in RenderPipelineDescriptor? If not, how is the D3D12 IBStripCutValue chosen?
* Should the vertex attributes somehow be included in the PipelineLayout so vertex buffers are treated as other resources and changed in bulk with them?
* Does the sample count of the pipeline state come from the RenderPass too?
* Should enablement of independent attachment blend state be explicit like in D3D12 or implicit? Should alpha to coverage be part of the multisample state or the blend state?
* What about Vulkan's VkPipelineDepthStencilStateCreateInfo::depthBoundsTestEnable and D3D12's D3D12_DEPTH_STENCIL_DESC1::DepthBoundsTestEnable? Should "depth test enable" be implicit or explicit?

What do you all think about these?

Corentin
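As a minimal sketch (not from the thread) of what Dzmitry's proposal for the index-type question would mean on the D3D12 side: the backend always requests the 32-bit strip cut value at pipeline creation, and the index format only appears later on the index buffer view. The D3D12 struct, enum, and format names are real; the helper function and its parameters are hypothetical.

    #include <d3d12.h>
    #include <dxgiformat.h>

    // Always request the 32-bit strip cut value on the PSO, independent of the
    // index format the application eventually binds.
    void SketchStripCut(D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc,
                        D3D12_INDEX_BUFFER_VIEW& ibView) {
        // Set once at pipeline creation; no index type in the pipeline descriptor.
        psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
        psoDesc.IBStripCutValue = D3D12_INDEX_BUFFER_STRIP_CUT_VALUE_0xFFFFFFFF;

        // The index format only becomes known when the buffer view is bound.
        // Per the gfx-rs test, D3D12 appears to treat 0xFFFFFFFF as 0xFFFF for
        // R16_UINT buffers -- behavior Ben notes is not guaranteed by the spec.
        ibView.Format = DXGI_FORMAT_R16_UINT; // or DXGI_FORMAT_R32_UINT
    }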
Received on Monday, 11 September 2017 22:25:37 UTC