
Comments on GEO Test

From: Jeremy Carroll <jjc@hplb.hpl.hp.com>
Date: Mon, 06 Sep 2004 20:00:46 +0100
Message-ID: <413CB3DE.1080609@hplb.hpl.hp.com>
To: public-i18n-geo@w3.org
CC: www-qa@w3.org


I've been browsing the GEO test work (on-going) at


It's good to see so much work being done in this area.
In particular, it seems that you are being very thorough in identifying 
the language-related features that should be tested, and in sketching 
what tests could be made.

I have some very general comments that try to relate what I've 
understood from the QA WG's work to these pages.

The principal problem I found was that it is unclear who is meant to do 
what with these tests.

In the QA WG's framework, testing is about testing some 'class of 
product' against some 'conformance statement'.

Since these tests seem to be about language in HTML, XHTML and CSS, I 
suggest that:
- there are two 'classes of product':
    + a user agent - typically a traditional web browser
    + a web page (HTML, XHTML) or CSS stylesheet, or possibly a web 
site, when multiple pages are being considered

- the 'conformance statement' is the (implicit?) conformance statement 
of being a user agent or being an HTML/XHTML/CSS page, but looking only 
at the specifics of language-related conformance.

To make the tests easier to use, it is important to identify who might 
use them, and for what.

I see four classes of user:

- a user agent developer, trying to ensure that their work handles 
language features correctly

- a user agent consumer trying to choose a user agent that best handles 
the language features that are important to them

- a web page author (whether human or automated) trying to ensure that 
their work correctly marks up language-related aspects of the content

- a web page consumer trying to understand whether a language-related 
difficulty with some site arises because the site is broken or because 
their user agent is broken

I envisage testing being done with some sort of test harness that 
co-ordinates running the tests and generating reports.

I imagine these test users operating in two modes:

A) testing a user agent on a large number of simple tests, each of 
which should pass or fail. Given the graphical nature of a traditional 
user agent, it is implausible to fully automate this procedure; almost 
all the tests will require manual assistance.

B) testing a web page for a number of simple features, each of which 
passes or fails. In the main, such tests can be automated, as in the 
HTML validator or the pubrules checker.

http://www.w3.org/2001/07/pubrules-form (member only link, I think)

A) Testing a User Agent

The person conducting the test needs to give the test harness manual 
feedback as to whether each test passes or fails. E.g. a test for a 
<link> element pointing to content in an alternative language passes if 
the user agent provides some indication that the alternative content is 
available, and fails if not.
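
For instance (a sketch of my own, not the actual GEO test markup), such 
a test page might carry links like:

```html
<!-- Hypothetical test markup: alternative-language versions of this
     page. A user agent passes this test if it gives the reader some
     indication that these alternatives are available. -->
<link rel="alternate" hreflang="fr" lang="fr" href="page.fr.html"
      title="Version française" />
<link rel="alternate" hreflang="zh" lang="zh" href="page.zh.html"
      title="中文版" />
```

Each such <link> would then correspond to exactly one pass/fail 
question put to the tester.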


But that page has a lot of information that obscures this simple test.
Including some of the HTML source verbatim within the content is 
unnecessary and confusing (there is always 'view source' for the geeks).

The questions at the end of the page, which are in fact the things to 
be tested, do not indicate the 'right' answers. E.g. for "Does the user 
agent provide information about all the links in the markup?" - without 
looking at the markup, how do I know? And why should I have to look at 
the markup?

The question should be something like:

Is a link to a page in FR displayed?
Is a link to a page in ZH displayed?

Ideally, web forms could be used to collect answers to these tests, in 
order to generate a report. This report could be generated either on 
the server or on the client. Thus the questions would appear as:

Is a link to a page in FR displayed? [YES] [NO]
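
As a sketch (the action URL and field names here are invented, not part 
of any existing harness), each question could be a radio-button pair 
posting to a results collector:

```html
<!-- Hypothetical results-collection form; a server-side script at the
     (invented) action URL would aggregate the pass/fail values into a
     report. -->
<form action="/test-harness/collect" method="post">
  <p>Is a link to a page in FR displayed?
     <input type="radio" name="test-fr-link" value="pass" /> YES
     <input type="radio" name="test-fr-link" value="fail" /> NO</p>
  <p>Is a link to a page in ZH displayed?
     <input type="radio" name="test-zh-link" value="pass" /> YES
     <input type="radio" name="test-zh-link" value="fail" /> NO</p>
  <p><input type="submit" value="Record results" /></p>
</form>
```

The same form could equally be processed client-side, with script 
building the report locally.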

Essentially running these tests will be boring, and the test framework 
should make this boring task as quick and easy as possible. No clutter, 
support for collecting the results, simple questions that do not require 
engagement of brain.

Many groups have RDF based formats for collecting test results, and 
there are a number of XSLT style sheets etc. that then format these 
results in an attractive way.

B) Testing a web page

The pubrules checker is very helpful for checking that W3C tech reports 
follow the W3C pubrules - each rule has some XSLT code to check it, and 
the result is then displayed on the summary page in green, red, or 
yellow - where yellow is an undetermined result requiring human thought 
to decide.
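
The checks involved are typically small. For example (my own sketch, 
not actual pubrules code), an XSLT fragment that turns the presence of 
a language declaration on the root element into a pass/fail verdict 
might look like:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Hypothetical rule: the root html element should declare its
       language. Emit 'pass' (green) or 'fail' (red) accordingly. -->
  <xsl:template match="/">
    <xsl:choose>
      <xsl:when test="/html/@lang | /html/@xml:lang">pass</xsl:when>
      <xsl:otherwise>fail</xsl:otherwise>
    </xsl:choose>
  </xsl:template>
</xsl:stylesheet>
```

(A real check against XHTML would also have to handle the XHTML 
namespace; this is only a sketch of the shape of such a rule.)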

I realise that the GEO TF does not have a lot of effort to dedicate to 
the essentially programming tasks envisaged here. However, clarity that 
the ideal long-term goal is something like what I've sketched (or some 
other overall test framework agreed by the GEO TF) would, I think, 
allow for tests that are more focussed on testing, and easier to use.

Hope this is helpful.

Some members of the QA IG may be able to point to appropriate overview 
material on planning for testing; it is not clear to me which would be 
the most helpful entry point into their work (some of which is listed 
below).


> * The QA Handbook *
>     http://www.w3.org/TR/2004/WD-qa-handbook-20040830/
>     A short guide to help a WG organize its life. Chairs and Staff 
> Contacts are most likely to be interested, though anyone can read it 
> as well.
> *The QA Framework: Specification Guidelines*
>     http://www.w3.org/TR/2004/WD-qaframe-spec-20040830/
>     A guide to help you work through all the problems you might 
> encounter creating a technology and writing a specification. Some of 
> these guidelines will be obvious to you, but a few others might raise 
> new issues you had no opportunity to think about. A checklist (ICS) is 
> provided with this document to help you check whether you have 
> explored each topic.
>     http://www.w3.org/TR/2004/WD-qaframe-spec-20040830/specgl-ics
> *Variability in Specification.*
>     http://www.w3.org/TR/2004/WD-spec-variability-20040830/
>     When designing a technology, you might face very difficult topics 
> with very deep ties to conformance and interoperability. If you need 
> to explore further advanced topics, we recommend reading Variability 
> in Specification.
> * The QA Framework: Test Guidelines *
>     http://www.w3.org/TR/2004/WD-qaframe-test-20040820/
>     This document is on hold, due to the QA WG's lack of resources. 
> If more manpower joined the WG, we might be able to finish it.
Received on Monday, 6 September 2004 19:01:14 UTC
