RE: attribute-value-selector-004.xht not well formed

> The testcase is however not about testing whether the cascade is done
> correctly. You seem to conflate the two. (At least as far as your
> deficient implementation is concerned.)


You're absolutely right, but it cuts both ways: if this test case should not be about testing the cascade, then why should it be about testing error handling?

> I don't see why it requires invalid markup or any additional markup
> other than <p> really. [1badattr] will be dropped regardless of what
> the markup specifies.


We're not verifying whether p is selected or not; we're verifying that [1badAttr] does not select anything. It's not about checking whether the rule is dropped or why. This is a Chapter 5 test, not a Chapter 4 test. Regardless, testing p alone is still not sufficient to assert that the *entire* proposed rule was dropped, since it groups two selectors. Both of them should be checked, and I don't see how that could be done without an invalid attribute in the markup. But as Arron reminded me, that is not relevant anyway: we're not testing CSS parsing and error handling here, only selection.
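To make the grouping point concrete, here is the rough shape of the rule being discussed (the actual contents of attribute-value-selector-004.xht are not quoted in this thread, so the exact declaration is assumed):

  /* assumed shape of the grouped rule: one valid selector paired with
     the invalid [1badAttr]; a conforming UA drops the whole rule
     because one selector in the group is invalid */
  p, [1badAttr] { color: red; }

Observing that the p stays unstyled only proves the rule as a whole was dropped; it says nothing about what [1badAttr] by itself would or would not have matched.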

> I don't see how that changes the scenario. The mere presence of
> attributes does not influence the parsing of CSS.

Nobody said it did. But nobody said this test was about validating CSS parsing either. The test case wants to show that [1badAttr] does not select anything. No more, no less.

You want to do it by selecting a p element in a way that causes [1badAttr] to be ignored if and only if the UA implements CSS error handling correctly. But we do not need such a dependency, since error handling is tested elsewhere. There is no need to add assumptions about proper error handling, or even about correct support of selector grouping; e.g. my deficient code might toss the rule because it thinks "p,[1badAttr]" is the IDENT and believes the comma to be the problem. This testcase simply intends to verify that [1badAttr] is not applied, and nothing else. (The 'nothing else' may be key to understanding Arron's intent, I think.)
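For illustration only (this is a sketch, not the actual test file), [1badAttr] could be exercised on its own, alongside an independent control rule, so that passing does not depend on grouping or on how error recovery is implemented:

  /* hypothetical sketch: the control rule should always apply, while
     the rule with the invalid [1badAttr] selector must never style
     anything in the document */
  p { color: green; }         /* control: the paragraph should be green */
  [1badAttr] { color: red; }  /* must not select anything */

Either way, the only assertion the testcase needs to make is that nothing ends up styled by [1badAttr].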

These test cases make as few assumptions as possible with respect to the number of features a UA needs to implement correctly in order to pass them. The fewer the assumptions, the lower the odds of false positives on any given test (imo).

Received on Tuesday, 10 March 2009 23:45:41 UTC