RE: mobileOK Pro 1st Draft

Hi Dom, hi all

Dom, thanks for your feedback.

I think it might be wise to remind everybody that these tests require a
human, sitting in front of the content, thinking about it and making a
judgement. This is not machine-based testing.

> * Access keys: 
>         - "Where there are elements, particularly navigation links and
>         form controls, that would benefit from access keys:"
> How does one determine that there are such elements?

By looking at the content.

>         - "if access keys are not indicated effectively":
> What does it mean to be indicated "effectively"? For 
> instance, is it enough to have a page on the site that lists 
> your access keys? or do they need to be indicated on the page itself? 

That depends on the context. In some cases link decoration may be
needed, in others a page where usage is explained, or some other
mechanism. What is important is that the user knows which access keys
can be used, or can find out about them.
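For illustration only (a hypothetical navigation fragment, not taken
from the draft), link decoration could indicate access keys directly in
the page like this:

```html
<!-- Hypothetical example: access keys indicated via link decoration,
     so the user can see which keys are available -->
<ul>
  <li><a href="/home" accesskey="1">Home [1]</a></li>
  <li><a href="/search" accesskey="2">Search [2]</a></li>
  <li><a href="/help" accesskey="0">Help [0]</a></li>
</ul>
```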

>         - "If the usage of access keys is not consistent 
> across a given
>         page and site"
> How do you determine what constitutes a site? How many pages 
> in the site do you need to check to determine this 
> consistency?

Since mobileOK is based on an assertion made by a content provider or
author, the scope of that assertion has to be made clear.
POWDER is a vehicle by which this can be accomplished.
Consistent means without variation, so everywhere within that scope.

> * Auto-refresh:
>         - "there is no link provided to another instance of 
> the content
>         which does not refresh"
> The difficulty you'll encounter with this type of test is that 
> a link can be buried in the middle of a great number of other 
> links, or signaled with an unclear language, etc.

If that is so, and to such a degree that the link becomes unusable, the
assertion made is false.
This can be used to give feedback to the author to improve the page in
that respect.
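As a sketch of what would satisfy the test (file name assumed, not from
the draft), an auto-refreshing page could clearly offer a non-refreshing
alternative:

```html
<!-- Hypothetical example: refreshing page with a clearly signalled
     link to a version of the content that does not refresh -->
<head>
  <meta http-equiv="refresh" content="60"/>
</head>
<body>
  <p>This page refreshes every 60 seconds.
     <a href="scores-static.html">View a version that does not refresh</a></p>
</body>
```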

> * Avoid Free text
>         - "If there are one or more free text input fields: Could they
>         be converted into any of:"
> I think this test doesn't give enough criteria to decide 
> whether it could or not.

The test cannot a priori give criteria, as there are limitless potential
use cases. 
What can be done is to give more examples.
Would that help?
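For instance (a hypothetical form, not taken from the draft), a free
text field asking for a country could be converted into a selection
list:

```html
<!-- Hypothetical example: free text input replaced by a select,
     avoiding text entry on a limited keypad -->
<!-- Before: <input type="text" name="country"/> -->
<select name="country">
  <option>France</option>
  <option>Germany</option>
  <option>United Kingdom</option>
</select>
```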

>         - "if data has been entered previously"
> This is also going to be pretty hard to test...

I am not sure which test you are referring to.

> * Background images
>         "Where there is a background image, if perceiving 
> content in the
>         foreground is not easy under normal daylight conditions:"
> Again, this seems too fuzzy; it's not clear what normal 
> daylight conditions are, nor what it means to be "easy to 
> perceive", and this also strongly depends on the device used 
> to make the test.

Precisely, which is why a human being has to make this test.
Perhaps here, too, examples would help.
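One such example (hypothetical file name and colours, not from the
draft): a background image declared with a fallback colour that keeps
the foreground text readable even where the image hinders perception:

```html
<!-- Hypothetical example: background image with a high-contrast
     fallback colour behind the foreground text -->
<style>
  body {
    background: #ffffff url("texture.png");
    color: #000000; /* dark text against a light background */
  }
</style>
```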

> I could probably make similar remarks for most of the 
> following tests; generally speaking, while I think they are 
> certainly improvements to the existing "what to test" 
> sections in the BP doc, I think they remain too vague for 
> "minimiz[ing] the variance of results produced through 
> subjective tests" as the charter of the task force calls for.

The question is how far we want to minimize variation.
I am sure that some of the tests could be tightened further, but here we
hope for some ideas from the group.

So, yes, please do make similar remarks for the rest of the tests.
Any suggestions for improvement are welcome.

-- Kai

Received on Friday, 7 March 2008 13:07:30 UTC