
Re: [css-shaders] GLSL implementation defined limits

From: Vincent Hardy <vhardy@adobe.com>
Date: Sun, 13 Nov 2011 22:08:05 -0800
To: Chris Marrin <cmarrin@apple.com>
CC: "public-fx@w3.org" <public-fx@w3.org>
Message-ID: <CAE5EF56.21A50%vhardy@adobe.com>
Hi Chris,

From: Chris Marrin <cmarrin@apple.com>
Date: Fri, 11 Nov 2011 17:03:33 -0800
To: Adobe Systems <vhardy@adobe.com>
Cc: "Gregg Tavares (wrk)" <gman@google.com>, "public-fx@w3.org" <public-fx@w3.org>
Subject: Re: [css-shaders] GLSL implementation defined limits

On Nov 11, 2011, at 3:27 PM, Vincent Hardy wrote:

...I don't think there is any way to announce hardware capabilities. Even if we gave the author the ability to discover details such as number of texture units, uniforms, and GPU program memory, there will be hardware that seems to have all the needed capabilities but that still can't run the shader because of some combination of the features being used. I think the best we can do is to have a minimum set of functionality that authors can try to stay within to have the best chance of interoperability. Then we should figure out some way of having fallback style for when a shader does fail.
I'm not actually sure how CSS handles fallback. Maybe we could do this:

.filtered {
    filter: shader(<my complicated shader>);
    filter: shader(<my less complicated shader>);
    filter: blur(3px);
}

So if the complicated shader fails, it will try the less complicated one. If that fails, it will use a simple blur. Would that work?
Hi Chris,
Having a fallback mechanism would be good, but I am not sure about two things. First, I think we would need to clarify what 'fails' means. Are you thinking the complicated shader would fail because it is slow, because it does not compile, or do you have another definition of failure in mind?
CSS handles fall back for things like background-image like so:
.bkg {
    background-image: url(...); /* baseline, should be supported by all implementations */
    background-image: linear-gradient(...); /* overrides the simple background-image if linear-gradient is supported */
}
Right, I think the problem in this case is that (as Tab pointed out) the above is a compile time function and we don't know if the shader will fail until runtime.

Your question is the right one to ask. What constitutes failure? Certainly a compile failure, or a failure when building the program (because you went over a resource limit) qualifies. But what about a shader that is "too slow"? I don't think we should codify any arbitrary rules about what is too slow. But many drivers will abort a shader if it takes too long [1]. If it weren't for that last case, we could build the shader eagerly and then at least we'd know if it was valid at @supports time. But for a shader that fails while running, I don't see what we could possibly do. We certainly don't want the failure mechanism that WebGL has, where an event comes in that the author needs to handle. That's contrary to the way CSS works.

So I think for now we should see if it's possible to deal with the case of the shader failing to build and try to give the author some way to handle that, like the @supports mechanism or something.
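A rough sketch of what that could look like in CSS. This is purely illustrative: the shader() notation follows Chris's example earlier in the thread, and @supports as drafted only tests declaration syntax at parse time, so reporting an actual shader build failure would require new semantics.

```css
/* Speculative sketch: assumes @supports could be extended to report
   whether a shader actually built, which current drafts do not do. */
.filtered {
    filter: blur(3px); /* baseline any implementation can honor */
}
@supports (filter: shader(url(fancy.fs))) {
    .filtered {
        filter: shader(url(fancy.fs));
    }
}
```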

If we agree that a shader failing to compile is the problem to address, I am not sure we actually need anything else. I think it is conceptually similar to an improperly encoded image: the user agent fetches the resource and, at processing time, cannot use it for some reason, so the resource is deemed invalid. A compilation error would be the same, in my mind, as if the shader URL pointed to, say, a video, an HTML file, or an SVG file, all of which are invalid shader code. The fail-over is the same and, as proposed in the draft, the effect is as if no shader was specified. Maybe what we should add to the spec is that when two shaders are specified, a failure in either one causes both to be ignored: as you pointed out in an earlier email, the shaders work hand in hand, and it typically does not make sense to keep one if the other fails.
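For example, the proposed behavior could play out like this (illustrative only: the shader() function and its url() arguments follow the notation used earlier in this thread, not a finalized syntax):

```css
/* A vertex shader and a fragment shader specified as a pair. */
.filtered {
    filter: shader(url(effect.vs) url(effect.fs));
}
/* If either effect.vs or effect.fs fails to compile, the proposal
   is to ignore both, so the element renders as if no shader had
   been specified (the same handling as an improperly encoded
   image resource). */
```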

Kind regards,

[1] In WebGL-land, this is the infamous "GPU reset" problem, where drivers time out when they see a shader "hanging". Many drivers handle this case very poorly, sometimes badly enough to cause a kernel panic. At best, the driver will reset the entire GPU, which causes every OpenGL context on the system to lose its state! This is one of the main reasons Apple doesn't ship WebGL turned on yet. The WebGL working group has defined a Context Lost mechanism, and we're trying to get drivers to handle this more rigorously by notifying us when it happens so we can send an event to the author. I don't think it's practical to do that in CSS.

Received on Monday, 14 November 2011 06:08:44 UTC

This archive was generated by hypermail 2.3.1 : Monday, 22 June 2015 03:33:46 UTC