
Re: [css-shaders] GLSL implementation defined limits

From: Gregg Tavares (wrk) <gman@google.com>
Date: Sun, 13 Nov 2011 22:55:54 -0800
Message-ID: <CAKZ+BNpPcZmWY=K+kQkHNioUvXOb+RfgCEaEdSRPq-Hdgvc2SA@mail.gmail.com>
To: Vincent Hardy <vhardy@adobe.com>
Cc: Chris Marrin <cmarrin@apple.com>, "public-fx@w3.org" <public-fx@w3.org>
On Sun, Nov 13, 2011 at 10:08 PM, Vincent Hardy <vhardy@adobe.com> wrote:

> Hi Chris,
> From: Chris Marrin <cmarrin@apple.com>
> Date: Fri, 11 Nov 2011 17:03:33 -0800
> To: Adobe Systems <vhardy@adobe.com>
> Cc: "Gregg Tavares (wrk)" <gman@google.com>, "public-fx@w3.org" <public-fx@w3.org>
> Subject: Re: [css-shaders] GLSL implementation defined limits
> On Nov 11, 2011, at 3:27 PM, Vincent Hardy wrote:
> ...I don't think there is any way to announce hardware capabilities. Even
> if we gave the author the ability to discover details such as number of
> texture units, uniforms, and GPU program memory, there will be hardware
> that seems to have all the needed capabilities but that still can't run the
> shader because of some combination of the features being used. I think the
> best we can do is to have a minimum set of functionality that authors can
> try to stay within to have the best chance of interoperability. Then we
> should figure out some way of having fallback style for when a shader does
> fail.
>  I'm not actually sure how CSS handles fallback. Maybe we could do this:
>  .filtered {
>  filter: shader(<my complicated shader>);
>  filter: shader(<my less complicated shader>);
>  filter: blur(3px);
>  }
>  So if the complicated shader fails, it will try the less complicated one.
> If that fails, it will use a simple blur. Would that work?
>  Hi Chris,
>  Having a fallback mechanism would be good, but I am not sure about two
> things. First, I think we would need to clarify what 'fails' means. Are
> you thinking the complicated shader would fail because it is slow, because
> it does not compile, or are you thinking of another definition of failure?
>  CSS handles fallback for properties like background-image like so:
>  .bkg {
>  background-image: url(...); /* baseline, should be supported by all
> implementations */
>  background-image: linear-gradient(...); /* will override the simple
> background-image specification if linear-gradient is supported */
> }
> Right, I think the problem in this case is that (as Tab pointed out) the
> above is a compile-time mechanism, and we don't know if the shader will fail
> until runtime.
> Your question is the right one to ask. What constitutes failure? Certainly
> a compile failure, or a failure when building the program (because you went
> over a resource limit) qualifies. But what about a shader that is "too
> slow"? I don't think we should codify any arbitrary rules about what is too
> slow. But many drivers will abort a shader if it takes too long [1]. If it
> weren't for that last case, we could build the shader eagerly and then at
> least we'd know if it was valid at @supports time. But for a shader that
> fails while running, I don't see what we could possibly do. We certainly
> don't want the failure mechanism that WebGL has, where an event comes in
> that the author needs to handle. That's contrary to the way CSS works.
> So I think for now we should see if it's possible to deal with the case of
> the shader failing to build and try to give the author some way to handle
> that, like the @supports mechanism or something.
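As a sketch of that @supports direction (the shader() function and the file name my-shader.vs here are illustrative assumptions, and @supports could only tell you whether the declaration parses, not whether the shader will actually build or run on a given GPU):

```css
/* Illustrative only: shader() is the proposed CSS Shaders filter
   function; my-shader.vs is an invented file name. */
.filtered {
  filter: blur(3px); /* baseline for UAs without shader support */
}

@supports (filter: shader(url(my-shader.vs))) {
  .filtered {
    /* Parses here, but could still fail to compile or run on the GPU
       at render time -- the runtime case discussed above. */
    filter: shader(url(my-shader.vs));
  }
}
```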
> If we agree that a shader failing to compile is the problem to address,
> I am not sure we actually need anything else. I think it is
> conceptually similar to an improperly encoded image: the user agent gets
> the resource and, at the time of processing, is not able to use it for some
> reason. The resource is deemed invalid. So a compilation error would be the
> same, in my mind, as if the shader was pointing to, say, a video, an HTML
> or an SVG file, which are all invalid shader code. I think the
> fail-over is the same and, as proposed in the draft, the effect is as if no
> shader was specified. Maybe what we should add to the spec is that in the
> case where two shaders are specified, a failure in one of them causes both to be
> ignored, since, as you pointed out in an earlier email, shaders work hand in
> hand, and it typically does not make sense to keep one if the other fails.
The case we are talking about above is when the shader IS valid. It just fails
on low-end hardware. That's the point we are trying to address. It's easy
to write a valid shader that runs on high-end hardware but fails on low-end
hardware. You have no way to know that except to try it.

So, if a developer makes a page that uses 3 valid shaders and, on some
hardware out there, 1 of them fails because of that hardware's limitations,
how is the developer supposed to deal with this? It's especially an issue
if the effect the developer is trying to achieve requires that all 3 effects
work together. The developer needs a way to either check that all 3
worked, or a way to say that if any one of them fails, they should all
fail, so the page goes back to its non-shader fallback for all 3 effects.
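To make that concrete, here is a sketch with invented class names and shader file names: each declaration pair falls back independently under the normal cascade, so there is no way to express "all three shaders or none":

```css
/* Invented example: if fold.vs fails at runtime on some GPU, only
   .sidebar reverts (at best) to its blur, while .header and .content
   keep their shaders -- a mixed state the author never intended. */
.header  { filter: blur(2px); filter: shader(url(ripple.vs)); }
.sidebar { filter: blur(2px); filter: shader(url(fold.vs)); }
.content { filter: blur(2px); filter: shader(url(wave.vs)); }
```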

> Kind regards,
> Vincent
> [1] In WebGL-land, this is the infamous "GPU reset" problem, where drivers
> timeout when they see a shader "hanging". Many drivers handle this case
> very poorly, sometimes even kernel panic poorly. At best, the driver will
> reset the entire GPU, which causes every OpenGL context on the system to
> lose its state! This is one of the main reasons Apple doesn't ship WebGL
> turned on yet. The WebGL working group has made a Context Lost mechanism
> where we're trying to get drivers to handle this more rigorously, notifying
> us when it happens, so we can send an event to the author. I don't think
> it's practical to do that in CSS.
> -----
> ~Chris
> cmarrin@apple.com
Received on Monday, 14 November 2011 06:56:25 UTC
