language spec vs. execution environment spec; security vs. interoperability

From: David Neto <dneto@google.com>
Date: Wed, 14 Nov 2018 18:05:31 -0500
Message-ID: <CAPmVsJWBujAizr85kdBnw5oye5auUtz=RM-DBTwZfSvVDKn2OQ@mail.gmail.com>
To: public-gpu <public-gpu@w3.org>

We've circled the same arguments and points a few times.  I think our
key disagreement is over terminology, and thus over how to classify requirements.

I think we agree that:

- A _language specification_ must define what a valid program is, and
the behaviour of a valid program.

I think our disagreement is whether a language specification must
define the behaviour of an _invalid_ program.  My position is that
a language specification does not do that.  After all, an English
dictionary does not give the French meanings of French words.

It's the _execution environment_ that describes the robustness and error
recovery requirements of an implementation in the face of invalid programs.

This splits further into two cases:

The _security_ requirement is that an application must be prevented
from doing bad things in any case.
(I drove a discussion on this a year ago:
https://github.com/gpuweb/gpuweb/issues/39 )

An _interoperability_ requirement is that an application does the same
(or similar) things in different execution environments.

The interesting case is a program that appears valid statically but has
invalid dynamic behaviour (e.g. an out-of-bounds index access).
Because our programming models are (basically) Turing-complete, there
will always be cases that can't be fully validated statically; they
require dynamic checks.

For an invalid (badly behaving) program:

- The security requirement still holds: a bad program must not do
bad things.

- The interoperability requirement is up for debate: Should different
implementations be forced to act similarly in the face of such a program?

I'm totally open to having that interoperability discussion, and it's
mainly about tradeoffs:  what costs do implementations impose on
valid/well-behaved programs so they can bound the behaviour of invalid
programs?  But that's about implementation _interoperability_, not
about the _language_ specification.

Ok, so what.  How important is interoperability of invalid programs?
An extreme position is that it's not important at all.  Think of it this
way: We don't care at all about the _performance_ of an invalid program.

But there is a downside to the extreme position: a program could be
invalid but seem to act ok on a very forgiving implementation, and
you find out late that it breaks for 1% of users on a more obscure
implementation you didn't test.  That's bad.  This happens a lot in
graphics land: It's a cliche that "works on NVIDIA" is insufficient
evidence to say that an application is written correctly.

So let's have a debate about interoperability requirements for invalid
programs.

Received on Wednesday, 14 November 2018 23:06:08 UTC
