Re: Subtests that apply to HTTP header fields [was: Re: mobileOK validation logic - jar file?]

Hi Francois,

I think it would be good to discuss this further. I will be attending
the last day of the next F2F meeting in London. Could we have 30
minutes to an hour on that last day to discuss this? Or is the agenda
already full? Or would this be a more relevant topic for the following
F2F meeting?

Regards,
Yeliz.
On 4 Mar 2009, at 11:18, Francois Daoust wrote:

> Hi Jo,
>
> Well, that's a good point.
>
> I do not think a WG note on the topic is really needed, nor do I
> think that we would be redefining mobileOK by adding an option to
> run the tests on files: by default, the Checker library would still
> reject URIs with a "file" scheme and only return tests and outcomes
> defined in the spec. The important definitions are the tests
> themselves and the conditions for a page to be mobileOK, and these
> would not change.
>
> That being said, the library is indeed supposed to be a reference  
> implementation of the mobileOK Basic Tests specification, and as  
> such, should not contain stuff that is not defined in the spec  
> (e.g. although anyone can add new tests, the library must stick to  
> the list of tests defined in the spec). The library was designed  
> with extensibility in mind, but this change requires the  
> introduction of a new test outcome and cannot be introduced as a  
> mere separate plug-in to the library.
>
> But then again, it seems a bit of a pity to have to duplicate the
> code just for that. Other ideas?
>
> I think we should discuss this within the main body of the working
> group and get the group's approval/refusal. What do you think?
>
> [In the meantime, thanks for not committing any change, Yeliz...]
>
> Francois.
>
>
>
> Jo Rabin wrote:
>> I'm sorry to pick up on this so belatedly. I generally agree with
>> the thrust of the thread, but given the specific wording in mobileOK
>> Basic Tests - e.g. 1.2 Applicability, 2.2 Validity of the Tests
>> and 2.3 Testing Outcomes - whatever is being discussed here is not
>> mobileOK as we know (and love) it.
>> If there is scope for off-line testing then I think a WG note on
>> the topic might be useful.
>> Jo
>> Yeliz Yesilada wrote:
>>> Hi Francois,
>>>
>>> Thanks for the clarification. I also think the first approach is  
>>> better. If everybody agrees with this, we will go for that approach.
>>>
>>> Yeliz.
>>> On 16 Feb 2009, at 08:47, Francois Daoust wrote:
>>>
>>>> Yeliz Yesilada wrote:
>>>>> Hi Francois,
>>>>> I am sorry but I am a bit confused about what you are  
>>>>> suggesting :(
>>>>
>>>> Yes, I see what you mean; I was mostly thinking aloud and didn't
>>>> make any choice ;)
>>>>
>>>> I think I convinced myself that the first approach below is both  
>>>> the most satisfactory and the easiest to implement. So I'd go  
>>>> for it.
>>>>
>>>>
>>>>> Are we talking about the same approach? So, in summary...
>>>>
>>>>> Francois wrote:
>>>>>> Another possibility is to have FAIL override CANNOTTELL. In
>>>>>> other words, compute the test outcome as:
>>>>>>  If any of the subtests outcome is FAIL,
>>>>>>    then test outcome is FAIL
>>>>>>  else if any of the subtests outcome is CANNOTTELL,
>>>>>>    then test outcome is CANNOTTELL
>>>>>>  else
>>>>>>    test outcome is PASS
>>>>>> ... and do much the same thing at the overall outcome level.
>>>>
>>>> +1 for this one. It's easy to implement and ensures the outcome  
>>>> value still means something:
>>>>  PASS: mobileOK
>>>>  CANNOTTELL: looking fine, but not everything could be checked
>>>>  FAIL: something's wrong (test may not have been run entirely)
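>>>>
>>>> As a minimal sketch of that rule (illustrative Java only, not the
>>>> actual Checker API, and ignoring WARN for simplicity):
>>>>
>>>>  import java.util.List;
>>>>
>>>>  enum Outcome { PASS, CANNOTTELL, FAIL }
>>>>
>>>>  class Aggregation {
>>>>    // FAIL overrides CANNOTTELL, which in turn overrides PASS;
>>>>    // the same rule applies at the overall outcome level.
>>>>    static Outcome aggregate(List<Outcome> subtestOutcomes) {
>>>>      Outcome result = Outcome.PASS;
>>>>      for (Outcome o : subtestOutcomes) {
>>>>        if (o == Outcome.FAIL) return Outcome.FAIL;
>>>>        if (o == Outcome.CANNOTTELL) result = Outcome.CANNOTTELL;
>>>>      }
>>>>      return result;
>>>>    }
>>>>  }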
>>>>
>>>>
>>>>> Or are you suggesting that we go for your original suggestion
>>>>> of including PARTIAL results?
>>>>>> In short, the possible outcomes for a subtest (the <result>  
>>>>>> element in the XML report) would be:
>>>>>>  - PASS, WARN, FAIL for subtests that can be run normally.
>>>>>>  - PARTIAL_PASS, PARTIAL_WARN, PARTIAL_FAIL for subtests that  
>>>>>> can only be applied partially.
>>>>>>  - CANNOTTELL for subtests that simply can't be run.
>>>>>>
>>>>>> The possible outcomes for a test would be:
>>>>>>  - PASS, FAIL for tests that can be completely checked
>>>>>>  - PARTIAL_PASS, PARTIAL_FAIL when there is a PARTIAL_* and/or  
>>>>>> CANNOTTELL in one of the subtests
>>>>>>  - CANNOTTELL when none of the subtests could be run (e.g. for  
>>>>>> CACHING)
>>>>>>
>>>>>> The possible overall outcomes would be:
>>>>>>  - PASS, FAIL when all tests can be completely checked (http/ 
>>>>>> https case)
>>>>>>  - PARTIAL_PASS, PARTIAL_FAIL when there is a PARTIAL_* and/or
>>>>>> CANNOTTELL in one of the tests
>>>>
>>>> -1 for that one, since it's more complex and the outcome value  
>>>> would then carry two orthogonal dimensions, which doesn't look  
>>>> that good.
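>>>>
>>>> To make that concrete, the outcome enum would have to mix the
>>>> verdict and the coverage in every value (illustrative Java only):
>>>>
>>>>  // Verdict (PASS/WARN/FAIL) and coverage (complete/partial/none)
>>>>  // are two different questions folded into a single value.
>>>>  enum SubtestOutcome {
>>>>    PASS, WARN, FAIL,
>>>>    PARTIAL_PASS, PARTIAL_WARN, PARTIAL_FAIL,
>>>>    CANNOTTELL
>>>>  }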
>>>>
>>>> Francois.
>>>>
>>>>>>
>>>>> Regards,
>>>>> Yeliz.
>>>>> On 13 Feb 2009, at 13:57, Francois Daoust wrote:
>>>>>>
>>>>>> Yeliz Yesilada wrote:
>>>>>> [...]
>>>>>>>>
>>>>>>>> 3/ I think there is a useful distinction to be made between
>>>>>>>> a subtest that can't be run because some data is missing,
>>>>>>>> and a subtest that doesn't need to be run at all, i.e. if
>>>>>>>> there are no objects in the page, the OBJECTS_OR_SCRIPT
>>>>>>>> subtests de facto pass. The first possibility is what we're
>>>>>>>> talking about. The second possibility may be of some use in
>>>>>>>> the future (I'm not suggesting we implement it right now).
>>>>>>>> In short, I would rather reserve NOT_APPLICABLE for the
>>>>>>>> second case, and use DATA_MISSING (I can't think of a better
>>>>>>>> proposal, but the idea is to point out that the moki
>>>>>>>> representation is incomplete) for checks on files.
>>>>>>> I agree. I think what Dominique suggested is a good idea:  
>>>>>>> using "cannotTell".
>>>>>>
>>>>>> Yes, good idea, thanks Dom!
>>>>>>
>>>>>>
>>>>>> [...]
>>>>>>> I think we need to ask why one would want to know whether a
>>>>>>> *sub-test* passed partially or not. For example, in our
>>>>>>> application, if a sub-test can only be checked partially, then
>>>>>>> we have to use the Tester (URI) version to check it again, so
>>>>>>> it's enough to know that a particular sub-test cannot be
>>>>>>> tested.
>>>>>>> I am just not sure whether these partial results would be
>>>>>>> useful or not. I would prefer to keep the approach simple:
>>>>>>> subtests
>>>>>>> =======
>>>>>>> - PASS/FAIL/WARN for subtests that can be run normally
>>>>>>> - CANNOTTELL if there is missing information
>>>>>>> tests
>>>>>>> ====
>>>>>>> - PASS/FAIL/WARN for tests that can be run normally
>>>>>>> - CANNOTTELL if any of the sub-tests returns "CANNOTTELL"
>>>>>>> But do you still think it's important to have PARTIAL results?
>>>>>>
>>>>>> At the subtest level, not that many subtests are concerned. I
>>>>>> think it would be useful to run EXTERNAL_RESOURCES-2 and -3 or
>>>>>> PAGE_SIZE_LIMIT-2 to alert authors that their page is simply
>>>>>> too big, because that's a core mobile limitation. That being
>>>>>> said, this may be added at a later stage.
>>>>>>
>>>>>> At the test level, I just think that we lose the ability to
>>>>>> tell the outcome of a test at a glance. You'll basically end up
>>>>>> with the following in each and every check run on a file
>>>>>> document:
>>>>>> <tests outcome="CANNOTTELL">
>>>>>>  <test name="CHARACTER_ENCODING_SUPPORT" outcome="CANNOTTELL">
>>>>>>   [list of FAIL/WARN/CANNOTTELL results]
>>>>>>  </test>
>>>>>>  <test name="CONTENT_FORMAT_SUPPORT" outcome="CANNOTTELL">
>>>>>>   [list of FAIL/WARN/CANNOTTELL results]
>>>>>>  </test>
>>>>>>  <test name="OBJECTS_OR_SCRIPT" outcome="CANNOTTELL">
>>>>>>   [list of FAIL/WARN/CANNOTTELL results]
>>>>>>  </test>
>>>>>>  [...]
>>>>>> </tests>
>>>>>>
>>>>>> Whilst totally correct, the overall outcome and the outcomes
>>>>>> of the tests mentioned above don't tell you whether there is a
>>>>>> FAIL in one of the subtests or not. Sure enough, this can be
>>>>>> sorted out by having a look at the list of <result /> elements,
>>>>>> but it's the same thing today: the outcome attribute is more a
>>>>>> "visual" clue than a computational need (i.e. its value can be
>>>>>> re-computed at any time from the list of <result /> elements).
>>>>>> By limiting ourselves to CANNOTTELL, we would be dropping that
>>>>>> "visual" clue: any report on a file would need to be parsed to
>>>>>> see how many of the tests that could be run actually failed.
>>>>>>
>>>>>> Any tool can compute the corresponding PARTIAL_PASS/
>>>>>> PARTIAL_FAIL pretty easily, but I guess I just like the idea
>>>>>> of still having a notion of PASS/FAIL at the test and overall
>>>>>> outcome level.
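>>>>>>
>>>>>> For instance (illustrative Java only, not part of the Checker
>>>>>> API), a tool could derive it along these lines:
>>>>>>
>>>>>>  import java.util.List;
>>>>>>
>>>>>>  class PartialOutcome {
>>>>>>    // Any FAIL among the subtest results gives a *_FAIL base;
>>>>>>    // any CANNOTTELL then turns the outcome into a PARTIAL_*.
>>>>>>    static String derive(List<String> results) {
>>>>>>      String base = results.contains("FAIL") ? "FAIL" : "PASS";
>>>>>>      return results.contains("CANNOTTELL")
>>>>>>          ? "PARTIAL_" + base
>>>>>>          : base;
>>>>>>    }
>>>>>>  }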
>>>>>>
>>>>>> Or...
>>>>>>
>>>>>> Another possibility is to have FAIL override CANNOTTELL. In
>>>>>> other words, compute the test outcome as:
>>>>>>  If any of the subtests outcome is FAIL,
>>>>>>    then test outcome is FAIL
>>>>>>  else if any of the subtests outcome is CANNOTTELL,
>>>>>>    then test outcome is CANNOTTELL
>>>>>>  else
>>>>>>    test outcome is PASS
>>>>>> ... and do much the same thing at the overall outcome level.
>>>>>>
>>>>>> Tools would still have to go through the list of results to
>>>>>> tell which tests were only partially run, but at least, looking
>>>>>> at a CANNOTTELL would tell you that "the thing looks great so
>>>>>> far, although not everything could be checked", while FAIL
>>>>>> would keep its "something's wrong" meaning. This works well
>>>>>> provided that a FAIL in a file:// case always implies a FAIL in
>>>>>> an http:// case, otherwise we would just raise a red flag that
>>>>>> isn't a real one. I think that's almost true, except for the
>>>>>> uncertainty about <object> elements when computing included
>>>>>> resources for files. I can live with that.
>>>>>>
>>>>>> This could even be applied to subtests, and the net result is  
>>>>>> that we don't have to define more outcome values that carry  
>>>>>> orthogonal meanings.
>>>>>>
>>>>>> What do you think?
>>>>>>
>>>>>> [...]
>>>>>>
>>>>>> Francois.
>>>>>>
>>>>
>>>
>>>

Received on Monday, 16 March 2009 08:11:15 UTC