W3C home > Mailing lists > Public > public-gpu@w3.org > November 2018

Re: Binary vs Text

From: Filip Pizlo <fpizlo@apple.com>
Date: Wed, 07 Nov 2018 17:42:14 -0500
Cc: "Myles C. Maxfield" <mmaxfield@apple.com>, Jeff Gilbert <jgilbert@mozilla.com>, "Russell, Kenneth" <kbr@google.com>, public-gpu <public-gpu@w3.org>
Message-id: <7F0FAB67-5236-4F6F-ACDD-5DE1D307262B@apple.com>
To: Kai Ninomiya <kainino@google.com>

> On Nov 7, 2018, at 5:15 PM, Kai Ninomiya <kainino@google.com> wrote:
> > OpLifetimeStart and OpLifetimeEnd are instructions in the SPIR-V language, which presumably means that lifetimes are not clearly expressed with those instructions. Even with the addition of those instructions, they can’t be trusted because they have to be validated, which means they could lie.
> According to your links, OpLifetimeStart/OpLifetimeEnd are only valid with Kernel capability (i.e. OpenCL). I would guess this is related to physical pointers.
> > Things like plumbing bounds around with other objects would require rewriting functions and variables and operations on those variables. It would require generating new SSA IDs or possibly regenerating / reassigning them
> Generating and reassigning SSA IDs is extremely simple compared with non-SSA IDs. This is why SSA is used in modern compilers to begin with.

Before SSA, folks used IRs with numbered temporaries, like three-address code (3AC). The thing SSA brings to the table is that it makes it easy to find the definition of a variable given its use. That’s why compilers use it. If all they wanted was an easy way to generate IDs, then it’s just as easy to do that without SSA as with it.
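To make that concrete, here is a minimal sketch (hypothetical toy IR, not actual SPIR-V or 3AC semantics) of why SSA makes use-to-def lookup trivial: every ID is defined exactly once, so one pass builds a table that answers "where was this value defined?" in constant time.

```python
# Each instruction: (result_id, opcode, operand_ids). Opcodes are made up.
ssa_function = [
    (1, "param", []),        # %1 = function parameter
    (2, "const", []),        # %2 = some constant
    (3, "add",   [1, 2]),    # %3 = add %1, %2
    (4, "mul",   [3, 3]),    # %4 = mul %3, %3
]

# One linear pass builds the def table; no dataflow analysis needed,
# precisely because SSA guarantees a single definition per ID.
defs = {res: (op, operands) for res, op, operands in ssa_function}

# Finding the definition of any operand is now a dictionary lookup.
assert defs[3] == ("add", [1, 2])

# With mutable, non-SSA temporaries the same query needs a
# reaching-definitions analysis, because a temporary may be
# assigned in several places along different paths.
```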

That said, I think both of you guys have a point:

- It’s true that editing SPIR-V to insert checks will mean that you’re not simply passing a SPIR-V blob through. You’re going to have to decode it to an SSA object graph and then encode that graph back to a blob. 

- It’s true that SPIR-V’s use of 32-bit variable IDs makes generating new ones straightforward.

But I should note that since WebHLSL is not a higher-order language, generating new variable names is pretty easy. Any name not already used is appropriate, which isn’t significantly different from finding a spare 32-bit variable ID.
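A sketch of that point: minting a fresh name in a textual shader versus a fresh 32-bit ID in a SPIR-V module. The names and helper functions here are illustrative, not tied to any real WHLSL grammar; the one real detail is that a SPIR-V module header carries an ID bound, so a fresh ID is just the current bound.

```python
def fresh_text_name(used_names, prefix="tmp"):
    """Return an identifier not already used anywhere in the module
    (hypothetical naming scheme for a text-based shading language)."""
    n = 0
    while f"{prefix}{n}" in used_names:
        n += 1
    return f"{prefix}{n}"

def fresh_spirv_id(id_bound):
    """SPIR-V module headers declare an ID bound; a fresh ID is the
    old bound, and the bound is bumped by one."""
    return id_bound, id_bound + 1

assert fresh_text_name({"x", "tmp0"}) == "tmp1"
new_id, new_bound = fresh_spirv_id(42)
assert (new_id, new_bound) == (42, 43)
```

Either way it is a constant-time (or near constant-time) operation, which is the sense in which the two formats aren't significantly different here.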

> > The evidence from WebAssembly vs JavaScript suggests this probably won’t be true (if by “easier” you mean either “faster” or “simpler to code correctly”).
> It sounds like you are claiming that the JavaScript parser/code generator is not more complex than the WASM parser/code generator. Is this correct? Can you provide evidence for this claim?

That depends on what you mean by complexity. It depends on a lot of things that are not really inherent to the languages. And it depends on whether you account for the handicap JS carries from being a more complex language in ways that have nothing to do with binary versus text.

Without a doubt, parsing JavaScript is, in practice, more code than parsing WebAssembly. That happens mostly because those parsers have been hyper-optimized over a long time (a decade or more in some cases, like the one in JSC). Maybe it’s also more code to parse JS even without those optimizations, but I’m not sure we have an easy way of knowing that just by looking at an existing JS parser or wasm parser.

What is clear is that JavaScript can have better startup time than WebAssembly. See: https://pspdfkit.com/blog/2018/a-real-world-webassembly-benchmark/

So if “complexity” is about time then I don’t think that WebAssembly wins. 

It looks like this varies by browser, and the cases where one language starts faster than the other seem to have more to do with the compiler backend than with parsing.

If by complexity you mean bugs, then WebAssembly parsing has bugs, as does JS parsing. JS parsing has fewer bugs for us, but that may have more to do with JS being very mature. It may also be because parsing text is easier to get right.

If by complexity you mean the amount or difficulty of the code after the parser but before the backend, then it’s unclear. WebAssembly and JavaScript both have quirks that implementations have to deal with before emitting code to the backend. JSC does weird stuff to JS before emitting bytecode, and it has significant complexity in how it interprets wasm to produce B3 IR. Also, WebAssembly opted against SSA: it’s more of an AST serialization disguised as a stack-based bytecode than SSA. I think wasm opted for that because dealing with something AST-like as an input was thought to be easier than dealing with SSA as an input.
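The "AST serialization disguised as a stack bytecode" point can be sketched as follows. Because Wasm-style stack code is essentially a post-order walk of an expression tree, a single pass with a value stack recovers that tree. The opcodes here are illustrative, not real WebAssembly (real Wasm also has structured control flow, typed locals, and validation rules this toy ignores).

```python
def stack_to_ast(code):
    """Decode a toy post-order stack bytecode back into an expression tree."""
    stack = []
    for op in code:
        if op[0] == "get":             # push a named local onto the stack
            stack.append(op[1])
        else:                          # binary op: pop two values, push a node
            rhs, lhs = stack.pop(), stack.pop()
            stack.append((op[0], lhs, rhs))
    (result,) = stack                  # a well-formed body leaves one value
    return result

# (a + b) * c, encoded post-order the way a Wasm function body would be:
code = [("get", "a"), ("get", "b"), ("add",), ("get", "c"), ("mul",)]
assert stack_to_ast(code) == ("mul", ("add", "a", "b"), "c")
```

An SSA consumer, by contrast, starts from a graph of defs and uses rather than a tree, which is a different (not obviously simpler) shape to validate and lower.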


> On Wed, Nov 7, 2018 at 2:11 PM Myles C. Maxfield <mmaxfield@apple.com> wrote:
>>> On Nov 6, 2018, at 3:55 PM, Jeff Gilbert <jgilbert@mozilla.com> wrote:
>>> I don't think it's necessarily helpful to think of this discussion as
>>> predominantly binary vs text.
>>> I think there is a lot of value in a constrained, targeted ingestion
>>> format, *and separately* I think SPIR-V is a natural choice for this
>>> ingestion format.
>>> SPIR-V's core format is very, very easy to parse,
>> SPIR-V is a sequence of 32-bit words, so you’re right that it’s easy to read a sequence of 32-bit words.
>> However, a Web browser’s job is to understand any possible sequence of inputs. What should a browser do when it encounters two OpEntryPoint instructions that happen to have the same name but different execution models? What happens when an ArrayStride decoration is set to 17 bytes? What happens when both SpecId and BuiltIn decorations are applied to the same value? SPIR-V today is clearly not a dream for ingestion. It is more difficult for a browser to understand a SPIR-V program than a WHLSL program.
>>> and lends itself
>>> well to simple but robust parsing. Lifetimes are clearly expressed,
>>> instruction invocations are very explicit, and ecosystem support is
>>> already good. It's a dream format for ingestion.
>>> Binning it with other (particularly older) binary formats is just
>>> inaccurate. Doing the initial parse gives you the structures
>>> (functions, types, bindings) you want pretty immediately. By
>>> construction, most unsafe constructs are impossible or trivially
>>> validatable. (SSA, instruction requirements, unsafe types, pointers)
>>> For what it's worth, text formats are technically binary formats
>>> with a charset. I would rather consume a constrained,
>>> rigidly-structured (SSA-like? s-expressions?) text-based assembly
>>> than some binary formats I've worked with. (DER, ugh!)
>>> Disentangling our ingestion format from the pressures of both
>>> redundancies and elisions that are desirable in directly-authored
>>> languages, simplifies things and actually prevents ambiguity. It
>>> immediately frees the authoring language to change and evolve at a
>>> faster rate, and tolerates more experimentation.
>>> I would rather solve the compilation tool distribution use-case
>>> without sacrificing simplicity and robustness in ingestion. An
>>> authoring-to-ingestion language compiler in a JS library would let us
>>> trivially share everything above the web-IR->host-IR translation,
>>> including optimization passes.
>>>> On Tue, Nov 6, 2018 at 3:16 PM Ken Russell <kbr@google.com> wrote:
>>>> Hi Myles,
>>>> Our viewpoint is based on the experience of using GLSL as WebGL's input language, and dealing with hundreds of bugs associated with parsing, validating, and passing a textual shading language through to underlying drivers.
>>>> Kai wrote this up at the beginning of the year in this Github issue: https://github.com/gpuweb/gpuweb/issues/44 , and there is a detailed bug list (which is still only a sampling of the associated bugs we fixed over the years) in this spreadsheet:
>>>> https://docs.google.com/spreadsheets/d/1bjfZJcvGPI4M6Df5HC8BPQXbl847RpfsFKw6SI6_T30/edit#gid=0
>>>> Unlike what I said on the call, the main issues aren't really around the parsing of the input language or string handling. Both the preprocessor's and compiler's parsers in ANGLE's shader translator are autogenerated from grammars. Of more concern were situations where we had to semi-arbitrarily restrict the source language so that we wouldn't pass shaders through to the graphics driver which would crash its own shader compiler. Examples included having to restrict the "complexity" or "depth" of expression trees to avoid stack overflows in some drivers (this was added as an implementation-specific security workaround rather than to the spec), working around bugs in variable scoping and shadowing, defeating incorrect compiler optimizations, and more. Please take the time to read Kai's writeup and go through the spreadsheet.
>>>> The question will come up: would using a lower-level representation like SPIR-V for WebGPU's shaders really address these problems? I think it would. SPIR-V uses SSA form and simple numbers for variables, which will eliminate entire classes of bugs in mishandling of language-level identifiers, variables, and scopes. SPIR-V's primitives are lower level than those in a textual shader language, and if it turns out restrictions on shaders are still needed in WebGPU's environment spec in order to work around driver bugs, they'll be easier to define more precisely against SPIR-V than source text. Using SPIR-V as WebGPU's shader ingestion format would bring other advantages, including that it's based on years of experience developing a portable binary shader representation, and has been designed in conjunction with GPU vendors across the industry.
>>>> On the conference call I didn't mean to over-generalize the topic to "binary formats vs. text formats in the browser", so apologies if I misspoke.
>>>> -Ken
>>>>> On Mon, Nov 5, 2018 at 10:58 PM Myles C. Maxfield <mmaxfield@apple.com> wrote:
>>>>> Hi!
>>>>> When we were discussing WebGPU today, the issue of binary vs text was raised. We are confused at the viewpoint that binary languages on the Web are inherently safer and more portable than text ones. All of our browsers accept HTML, CSS, JavaScript, binary image formats, binary font files, GLSL, and WebAssembly, and so we don’t understand how our teams came to opposite conclusions given similar circumstances.
>>>>> Can you describe the reasons for this viewpoint (as specifically as possible, preferably)? We’d like to better understand the reasoning.
>>>>> Thanks,
>>>>> Myles

Received on Wednesday, 7 November 2018 22:42:47 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:52:25 UTC