Re: Binary vs Text

> On Nov 7, 2018, at 1:49 PM, Myles C. Maxfield <mmaxfield@apple.com> wrote:
> 
> 
> 
>> On Nov 6, 2018, at 3:55 PM, Jeff Gilbert <jgilbert@mozilla.com <mailto:jgilbert@mozilla.com>> wrote:
>> 
>> I don't think it's necessarily helpful to think of this discussion as
>> predominantly binary vs text.
>> 
>> I think there is a lot of value in a constrained, targeted ingestion
>> format, *and separately* I think SPIR-V is a natural choice for this
>> ingestion format.
>> 
>> SPIR-V's core format is very, very easy to parse, and lends itself
>> well to simple but robust parsing. Lifetimes are clearly expressed,
> 
> OpLifetimeStart <https://www.khronos.org/registry/spir-v/specs/unified1/SPIRV.html#OpLifetimeStart> and OpLifetimeStop <https://www.khronos.org/registry/spir-v/specs/unified1/SPIRV.html#OpLifetimeStop> are instructions in the SPIR-V language, which presumably means that lifetimes are not clearly expressed with those instructions. Even with the addition of those instructions, they can’t be trusted because they have to be validated, which means they could lie.

Oops, I meant “not clearly expressed without those instructions"
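[As a concrete illustration of the "very easy to parse" claim above: a SPIR-V module is a flat stream of 32-bit words, with a five-word header followed by instructions whose first word packs a word count and an opcode. The sketch below is a hypothetical minimal reader, not any browser's actual implementation; a real validator must also check the ID bound, per-opcode operand rules, and endianness.]

```python
import struct

SPIRV_MAGIC = 0x07230203

def parse_spirv(data: bytes):
    """Split a SPIR-V module into (opcode, operand-words) pairs.

    Minimal sketch only: real ingestion must additionally validate the
    declared ID bound and the operand rules of each opcode.
    """
    words = struct.unpack("<%dI" % (len(data) // 4), data)
    if words[0] != SPIRV_MAGIC:
        raise ValueError("not a SPIR-V module")
    instructions = []
    i = 5  # header: magic, version, generator, bound, schema
    while i < len(words):
        word_count = words[i] >> 16
        opcode = words[i] & 0xFFFF
        if word_count == 0 or i + word_count > len(words):
            raise ValueError("malformed instruction at word %d" % i)
        instructions.append((opcode, words[i + 1 : i + word_count]))
        i += word_count
    return instructions
```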

> 
>> instruction invocations are very explicit, and ecosystem support is
>> already good. It's a dream format for ingestion.
>> 
>> Binning it with other (particularly older) binary formats is just
>> inaccurate. Doing the initial parse gives you the structures
>> (functions, types, bindings) you want pretty immediately. By
>> construction, most unsafe constructs are impossible or trivially
>> validatable. (SSA, instruction requirements, unsafe types, pointers)
>> 
>> For what it's worth, text formats are technically binary formats
>> with a charset. I would rather consume a constrained,
>> rigidly-structured (SSA-like? s-expressions?) text-based assembly
>> than some binary formats I've worked with. (DER, ugh!)
>> 
>> Disentangling our ingestion format from the pressures of both
>> redundancies and elisions that are desirable in directly-authored
>> languages simplifies things and actually prevents ambiguity. It
>> immediately frees the authoring language to change and evolve at a
>> faster rate, and tolerates more experimentation.
>> 
>> I would rather solve the compilation tool distribution use-case
>> without sacrificing simplicity and robustness in ingestion. An
>> authoring-to-ingestion language compiler in a JS library would let us
>> trivially share everything above the web-IR->host-IR translation,
>> including optimization passes.
>> On Tue, Nov 6, 2018 at 3:16 PM Ken Russell <kbr@google.com <mailto:kbr@google.com>> wrote:
>>> 
>>> Hi Myles,
>>> 
>>> Our viewpoint is based on the experience of using GLSL as WebGL's input language, and dealing with hundreds of bugs associated with parsing, validating, and passing a textual shading language through to underlying drivers.
>>> 
>>> Kai wrote this up at the beginning of the year in this Github issue: https://github.com/gpuweb/gpuweb/issues/44 <https://github.com/gpuweb/gpuweb/issues/44> , and there is a detailed bug list (which is still only a sampling of the associated bugs we fixed over the years) in this spreadsheet:
>>> https://docs.google.com/spreadsheets/d/1bjfZJcvGPI4M6Df5HC8BPQXbl847RpfsFKw6SI6_T30/edit#gid=0 <https://docs.google.com/spreadsheets/d/1bjfZJcvGPI4M6Df5HC8BPQXbl847RpfsFKw6SI6_T30/edit#gid=0>
>>> 
>>> Unlike what I said on the call, the main issues aren't really around the parsing of the input language or string handling. Both the preprocessor's and compiler's parsers in ANGLE's shader translator are autogenerated from grammars. Of more concern were situations where we had to semi-arbitrarily restrict the source language so that we wouldn't pass shaders through to the graphics driver which would crash its own shader compiler. Examples included having to restrict the "complexity" or "depth" of expression trees to avoid stack overflows in some drivers (this was added as an implementation-specific security workaround rather than to the spec), working around bugs in variable scoping and shadowing, defeating incorrect compiler optimizations, and more. Please take the time to read Kai's writeup and go through the spreadsheet.
>>> 
>>> The question will come up: would using a lower-level representation like SPIR-V for WebGPU's shaders really address these problems? I think it would. SPIR-V uses SSA form and simple numbers for variables, which will eliminate entire classes of bugs in mishandling of language-level identifiers, variables, and scopes. SPIR-V's primitives are lower level than those in a textual shader language, and if it turns out restrictions on shaders are still needed in WebGPU's environment spec in order to work around driver bugs, they'll be easier to define more precisely against SPIR-V than source text. Using SPIR-V as WebGPU's shader ingestion format would bring other advantages, including that it's based on years of experience developing a portable binary shader representation, and has been designed in conjunction with GPU vendors across the industry.
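[To make the SSA point concrete: because SPIR-V variables are just numeric result IDs, a def-before-use check is a set-membership test rather than scope/shadowing analysis. The sketch below is a deliberately simplified illustration assuming straight-line code; real SPIR-V validation must also handle the forward references the spec permits (e.g. OpPhi operands and branch targets).]

```python
def validate_ssa_ids(instructions, bound):
    """Check that every operand ID was defined earlier, and that no result
    ID is defined twice or falls outside the declared ID bound.

    Simplified sketch: each instruction is (result_id_or_None, operand_ids),
    and legitimate forward references are ignored.
    """
    defined = set()
    for result, operands in instructions:
        for op in operands:
            if op not in defined:
                return False  # use before definition
        if result is not None:
            if result in defined or not (0 < result < bound):
                return False  # redefinition, or ID out of bound
            defined.add(result)
    return True
```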
>>> 
>>> On the conference call I didn't mean to over-generalize the topic to "binary formats vs. text formats in the browser", so apologies if I misspoke.
>>> 
>>> -Ken
>>> 
>>> 
>>> 
>>> On Mon, Nov 5, 2018 at 10:58 PM Myles C. Maxfield <mmaxfield@apple.com> wrote:
>>>> 
>>>> Hi!
>>>> 
>>>> When we were discussing WebGPU today, the issue of binary vs text was raised. We are confused by the viewpoint that binary languages on the Web are inherently safer and more portable than text ones. All of our browsers accept HTML, CSS, JavaScript, binary image formats, binary font files, GLSL, and WebAssembly, and so we don’t understand how our teams came to opposite conclusions given similar circumstances.
>>>> 
>>>> Can you describe the reasons for this viewpoint (as specifically as possible, preferably)? We’d like to better understand the reasoning.
>>>> 
>>>> Thanks,
>>>> Myles
>> 
> 

Received on Wednesday, 7 November 2018 22:00:08 UTC