
Re: [css-shaders] GLSL implementation defined limits

From: Chris Marrin <cmarrin@apple.com>
Date: Fri, 11 Nov 2011 17:03:33 -0800
Cc: "Gregg Tavares (wrk)" <gman@google.com>, "public-fx@w3.org" <public-fx@w3.org>
Message-id: <63B0A9D3-1463-48B9-B08C-728A0DAECAD4@apple.com>
To: Vincent Hardy <vhardy@adobe.com>

On Nov 11, 2011, at 3:27 PM, Vincent Hardy wrote:

>> ...I don't think there is any way to announce hardware capabilities. Even if we gave the author the ability to discover details such as number of texture units, uniforms, and GPU program memory, there will be hardware that seems to have all the needed capabilities but that still can't run the shader because of some combination of the features being used. I think the best we can do is to have a minimum set of functionality that authors can try to stay within to have the best chance of interoperability. Then we should figure out some way of having fallback style for when a shader does fail.
>> I'm not actually sure of how CSS handles fallback. Maybe we could do this:
>> 	.filtered {
>> 		filter: shader(<my complicated shader>);
>> 		filter: shader(<my less complicated shader>);
>> 		filter: blur(3px);
>> 	}
>> So if the complicated shader fails, it will try the less complicated one. If that fails, it will use a simple blur. Would that work?
> Hi Chris,
> Having a fallback mechanism would be good, but I am not sure about two things. First, I think we would need to clarify what 'fails' means. Are you thinking the complicated shader would fail because it is slow, because it does not compile, or do you have another definition of failure in mind?
> CSS handles fallback for properties like background-image like so:
> .bkg {
> 	background-image: url(...); /* baseline, should be supported by all implementations */
> 	background-image: linear-gradient(...); /* will override the simple background-image specification if linear-gradient is supported */
> }

Right, I think the problem in this case is that (as Tab pointed out) the above is a compile-time mechanism, and we don't know whether the shader will fail until runtime.

Your question is the right one to ask: what constitutes failure? Certainly a compile failure, or a failure when building the program (because you went over a resource limit), qualifies. But what about a shader that is "too slow"? I don't think we should codify any arbitrary rules about what is too slow, but many drivers will abort a shader if it takes too long to run [1]. If it weren't for that last case, we could build the shader eagerly, and then at least we'd know at @supports time whether it was valid. But for a shader that fails while running, I don't see what we could possibly do. We certainly don't want the failure mechanism WebGL has, where an event comes in that the author needs to handle; that's contrary to the way CSS works.
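To illustrate the detectable part of that failure space: below is a sketch (not from the thread) of how an implementation could check compile and link status eagerly through the standard WebGL API. The function and variable names are mine; a runtime abort by the driver would still go undetected by this check, which is exactly the gap being discussed.

```javascript
// Sketch: eager detection of the build-time failures Chris lists
// (compile failure, or link failure from exceeding a resource limit),
// using the standard WebGL API. A driver timeout at draw time is
// invisible to this check.
function shaderProgramBuilds(gl, vertexSource, fragmentSource) {
  function compile(type, source) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      gl.deleteShader(shader);
      return null; // compile failure
    }
    return shader;
  }
  const vs = compile(gl.VERTEX_SHADER, vertexSource);
  const fs = compile(gl.FRAGMENT_SHADER, fragmentSource);
  if (!vs || !fs) return false;
  const program = gl.createProgram();
  gl.attachShader(program, vs);
  gl.attachShader(program, fs);
  gl.linkProgram(program);
  // Link failure covers resource-limit overruns (uniforms, varyings, ...)
  const ok = !!gl.getProgramParameter(program, gl.LINK_STATUS);
  gl.deleteProgram(program);
  return ok;
}
```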

So I think for now we should see if it's possible to deal with the case of the shader failing to build and try to give the author some way to handle that, like the @supports mechanism or something.

[1] In WebGL-land, this is the infamous "GPU reset" problem, where drivers time out when they see a shader "hanging". Many drivers handle this case very poorly, sometimes even kernel-panic poorly. At best, the driver resets the entire GPU, which causes every OpenGL context on the system to lose its state! This is one of the main reasons Apple doesn't ship with WebGL turned on yet. The WebGL working group has defined a Context Lost mechanism, and we're trying to get drivers to handle this more rigorously, notifying us when it happens so we can send an event to the author. I don't think it's practical to do that in CSS.
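For comparison, the Context Lost mechanism mentioned above surfaces the loss to script as DOM events; a sketch follows. The "webglcontextlost" and "webglcontextrestored" event names come from the WebGL spec; the wrapper function and hooks are illustrative.

```javascript
// Sketch of the event-based recovery the WebGL spec defines --
// the per-author handling Chris argues is contrary to how CSS works.
function installContextLossHandlers(canvas, { onLost, onRestored }) {
  canvas.addEventListener("webglcontextlost", (event) => {
    // The default is to never restore; preventDefault() opts in
    // to receiving "webglcontextrestored" later.
    event.preventDefault();
    onLost();
  });
  canvas.addEventListener("webglcontextrestored", () => {
    // All GPU-side objects (shaders, programs, textures) are gone
    // and must be recreated by the application.
    onRestored();
  });
}
```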

Received on Saturday, 12 November 2011 01:04:19 UTC
