RE: [moved] Next meeting: 27 August 2014

Dear Group,

Below are some reactions to the comments received.

In general: Phew, it seems we have quite a bunch of things to consider...!
Many comments touch on subtle aspects that seem more appropriately dealt
with when we eventually go through the detailed disposition of comments,
but some relevant topics already merit a closer look now.

  - http://lists.w3.org/Archives/Public/public-wai-ert-tools/2014Jul/0005
* On repeated references to EARL, which is seldom used: This hits a raw
nerve, but there is something to learn from the comment. First, it seems
clear that outputting evaluation results in a standardized reporting
language is a potential (and desirable) feature of an evaluation tool.
Second, EARL seems to be the most mature such format. Third, ERT is the
author of both WAET and EARL, so there may be some bias in our
recommending EARL. That said, I would not feel comfortable removing the
recommendations of EARL, but the commenter reveals an issue that should be
addressed somehow: we are recommending something that is not widely
deployed and is not yet a Recommendation (and, as such, interoperability
between two tools has not been proved). Besides, at least one other
commenter requested a list of tools that support EARL. (A minimal sketch
of what an EARL report could look like follows this list.)
* On CSS and JavaScript being different from other content types: True;
moreover, several other commenters have spotted this issue as well. We are
not even suggesting that CSS and JavaScript be evaluated on their own
(which would be possible, but of limited utility), but grouping them at
the same level as HTML pages, PDF, images, and multimedia files might
confuse readers.
* On accessibility testing of images: I am not sure whether this is a
potential new feature not yet listed, a content type, or simply a specific
test case (which would be out of scope); or whether, quite the opposite,
it should be left out. It is true that tools can theoretically analyze
images; I remember reading about a tool that automatically detected the
foreground and background of images and analyzed their contrast (a sketch
of that kind of contrast check also follows this list). It was aimed at
medical images; does anyone remember its name?
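
To make the EARL point above concrete, here is a minimal sketch of a
single EARL assertion, assuming Python with rdflib; the tool and page URIs
are made-up placeholders:

    from rdflib import BNode, Graph, Namespace, URIRef
    from rdflib.namespace import RDF

    EARL = Namespace("http://www.w3.org/ns/earl#")

    g = Graph()
    g.bind("earl", EARL)

    # One assertion: a (placeholder) tool reports that a (placeholder)
    # page fails WCAG 2.0 SC 1.1.1 (text alternatives).
    assertion, result = BNode(), BNode()
    g.add((assertion, RDF.type, EARL.Assertion))
    g.add((assertion, EARL.assertedBy, URIRef("http://example.org/some-tool")))
    g.add((assertion, EARL.subject, URIRef("http://example.org/page.html")))
    g.add((assertion, EARL.test, URIRef("http://www.w3.org/TR/WCAG20/#text-equiv-all")))
    g.add((assertion, EARL.result, result))
    g.add((result, RDF.type, EARL.TestResult))
    g.add((result, EARL.outcome, EARL.failed))

    print(g.serialize(format="turtle"))  # rdflib 6+ returns a str here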
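And on the image-contrast point, a sketch of the WCAG 2.0
relative-luminance and contrast-ratio computation such a tool would rely
on; detecting which pixels are foreground and which are background is the
hard part and is not shown:

    def relative_luminance(r, g, b):
        """WCAG 2.0 relative luminance of an sRGB color (channels 0-255)."""
        def linearize(c):
            c = c / 255.0
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

    def contrast_ratio(fg, bg):
        """WCAG 2.0 contrast ratio between two sRGB colors, from 1 to 21."""
        l1, l2 = relative_luminance(*fg), relative_luminance(*bg)
        lighter, darker = max(l1, l2), min(l1, l2)
        return (lighter + 0.05) / (darker + 0.05)

    print(contrast_ratio((0, 0, 0), (255, 255, 255)))  # 21.0, black on white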

  - http://lists.w3.org/Archives/Public/public-wai-ert-tools/2014Aug/0000
[Missing from Shadi's original list]
* There are a few comments related to ATAG 2.0; maybe we should take a
deeper look at it to make sure we stay aligned.

  - http://lists.w3.org/Archives/Public/public-wai-ert-tools/2014Aug/0001

  - http://lists.w3.org/Archives/Public/public-wai-ert-tools/2014Aug/0002
* On emulating the experience of people with disabilities: Even though
"disability simulator" is a misleading term, some accessibility evaluation
tools do provide features that render the content in different ways, so
that the evaluator can assess how a user under specific conditions would
perceive it (see the sketch below). I remember discussing this on a
previous call, but I cannot track it down to check why it was ultimately
not included.
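
One simple example of such a rendering feature, assuming Python with
Pillow (the file names are placeholders): re-render a page screenshot in
grayscale, so the evaluator can judge whether information is conveyed by
color alone.

    from PIL import Image

    # Desaturate the screenshot; content that relies on color alone to
    # convey information will lose it in the output image.
    Image.open("screenshot.png").convert("L").save("screenshot-gray.png")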

  - http://lists.w3.org/Archives/Public/public-wai-ert-tools/2014Aug/0003
* On removing "developers" from the title: we have a wording issue here.
This guide mainly targets people who are responsible for creating
evaluation tools, whatever their role might be (commissioner, project
manager, coder). However, some commenters had trouble understanding it
that way (another commenter objected to the talk of "management", which in
their opinion is not something developers do).

  - http://lists.w3.org/Archives/Public/public-wai-ert-tools/2014Aug/0004
* On adding "platform" as a feature: strongly agree (I usually call it
"deployment"); currently we are just providing some hints regarding workflow
integration.
* Interesting comments on several potentially new features related to
crawling.
* Some comments on nice-to-have features: should we stick to features
already implemented, or can we propose "ideal" features?
* On splitting crawling and testing, and different test environments: The
test environment alternatives presented by the commenter were indeed
mentioned by Carlos when introducing Tool B, so this might be a commonly
known concept (another commenter also talked about the different degrees
of artificial intelligence that could be embedded in a tool). A sketch of
the crawling/testing split follows below.
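
On that crawling/testing split, a minimal sketch using only the Python
standard library; a real tool would respect robots.txt, stay within the
target site, and hand the collected URLs to a separate testing stage:

    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collects absolute link targets from one HTML page."""
        def __init__(self, base):
            super().__init__()
            self.base = base
            self.links = set()

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.add(urljoin(self.base, value))

    def crawl(start, limit=10):
        """Breadth-first crawl; returns the set of visited URLs."""
        queue, seen = [start], set()
        while queue and len(seen) < limit:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            collector = LinkCollector(url)
            collector.feed(urlopen(url).read().decode("utf-8", "replace"))
            queue.extend(collector.links - seen)
        return seen

    for url in crawl("http://example.org/"):
        print(url)  # placeholder: run the accessibility tests on each URL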

  - http://lists.w3.org/Archives/Public/public-wai-ert-tools/2014Aug/0005
* By the way, I just caught an editorial issue at the end of 2.4.2
(Localization and Internationalization): "like for instance, such as those
related to readability". One of "for instance" and "such as" is redundant.

  - http://lists.w3.org/Archives/Public/public-wai-ert-tools/2014Aug/0006

  - http://lists.w3.org/Archives/Public/public-wai-ert-tools/2014Aug/0007
* Prescription versus description: I would reassert that this document is
descriptive, not prescriptive; that is, it merely lists the features an
evaluation tool might have. While it would be great if an evaluation tool
implemented most of them, this is never prescribed.

Regards,

Samuel.
