Re: EARL use scenario / technique

Well, the technique of recording an implementation is a further part of these
techniques (and is part of meeting WCAG checkpoint 13.2, which applies in a
few of the relevant checkpoint cases), so I guess I should outline some
techniques for it.

I agree that it would be good to outline some other, less controversial
checkpoint techniques too, but I was hoping that people could adapt what I
have provided and do that - it seemed like easy pickings.

Herewith, a technique for recording an EARL evaluation in general:

Record an EARL evaluation as a bookmark, by POSTing it to an Annotea server.
@@action Charles, provide a code sample for the POST body. This is something
I need to check on, but it is basically a question of creating the right
wrapper syntax around the EARL code.
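
Something like the following, as a rough Python sketch - the server URI is
invented, and the wrapper (here an annotation-ns Annotation carrying the EARL
as its body) is exactly the part that still needs checking:

    import urllib.request

    # Hypothetical Annotea server endpoint - substitute a real one.
    ANNOTEA_SERVER = "http://annotea.example.org/Annotation"

    # The EARL produced by the evaluation tool (elided here).
    EARL_RDF = "<!-- EARL assertions go here -->"

    # Guess at the wrapper: an Annotation about the evaluated page whose
    # body carries the EARL inline. Whether the EARL may be inlined like
    # this, or must live at its own URI referenced by a:body, is the very
    # question still to be checked.
    wrapper = f"""<r:RDF xmlns:r="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
        xmlns:a="http://www.w3.org/2000/10/annotation-ns#"
        xmlns:d="http://purl.org/dc/elements/1.1/">
      <r:Description>
        <r:type r:resource="http://www.w3.org/2000/10/annotation-ns#Annotation"/>
        <a:annotates r:resource="http://example.org/somesite/dfg/wqertyuiop.xml"/>
        <d:creator>charles</d:creator>
        <a:body r:parseType="Literal">{EARL_RDF}</a:body>
      </r:Description>
    </r:RDF>"""

    request = urllib.request.Request(
        ANNOTEA_SERVER,
        data=wrapper.encode("utf-8"),
        headers={"Content-Type": "application/xml"},  # content type is a guess
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # The reply should describe the newly created annotation,
        # including the URI the server assigned to it.
        print(response.read().decode("utf-8"))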

In an offline tool, or working offline, record the EARL locally as an
annotation, and when the document is published, POST the EARL to an online
server, adapting the URIs referred to if necessary.
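
As a minimal sketch of the recording step (the store layout - one RDF file
per assertion - is invented for illustration):

    from pathlib import Path

    # Hypothetical layout for the local store described below:
    # file:///user/charles/.earl, one RDF file per assertion.
    EARL_STORE = Path("/user/charles/.earl")

    def record_locally(earl_rdf: str) -> None:
        """Append one serialised EARL assertion to the local store."""
        EARL_STORE.mkdir(parents=True, exist_ok=True)
        count = len(list(EARL_STORE.glob("*.rdf")))
        (EARL_STORE / f"assertion-{count:05d}.rdf").write_text(
            earl_rdf, encoding="utf-8")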

For example, a local annotations store might be kept in
file:///user/charles/.earl (registering this location with the system would
help when working offline with multiple tools, and a tool publishing the
location it uses would help other tools to be compatible).

Annotations might be made about
file:///user/charles/documents/somesite/subarea/document1234567.xml in the
process of editing this document.

If the document is then published to
http://example.org/somesite/dfg/wqertyuiop.xml, filter the EARL comments
stored locally to replace the local URI with the new one, and POST the new
version to an online server.
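
Sketched in the same vein (a naive textual substitution - a real tool would
parse the RDF and rewrite the resource references properly):

    import urllib.request
    from pathlib import Path

    EARL_STORE = Path("/user/charles/.earl")                  # hypothetical
    ANNOTEA_SERVER = "http://annotea.example.org/Annotation"  # hypothetical

    LOCAL_URI = "file:///user/charles/documents/somesite/subarea/document1234567.xml"
    PUBLIC_URI = "http://example.org/somesite/dfg/wqertyuiop.xml"

    for rdf_file in EARL_STORE.glob("*.rdf"):
        earl = rdf_file.read_text(encoding="utf-8")
        if LOCAL_URI not in earl:
            continue  # a comment about some other document
        # Swap the local URI for the published one and POST the result.
        published = earl.replace(LOCAL_URI, PUBLIC_URI)
        urllib.request.urlopen(urllib.request.Request(
            ANNOTEA_SERVER,
            data=published.encode("utf-8"),
            headers={"Content-Type": "application/xml"},
            method="POST",
        ))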

Optionally, a tool may delete superseded EARL comments, or even do a complete
merge, before publishing the EARL associated with a page.
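
The supersession test itself could be as simple as keeping, per page and per
test, only the newest assertion - sketched here over plain dictionaries
standing in for parsed EARL statements:

    def drop_superseded(assertions):
        """Keep only the newest assertion for each (page, test) pair.

        `assertions` is a list of dicts with "page", "test" and "date"
        keys (ISO dates sort correctly as strings) - a stand-in for
        real, parsed EARL statements.
        """
        latest = {}
        for a in sorted(assertions, key=lambda a: a["date"]):
            latest[(a["page"], a["test"])] = a
        return list(latest.values())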

end example.

Chaals

On Mon, 22 Oct 2001, Phill Jenkins wrote:

  > What are the specific techniques here?
  >
  > record the reading level of content at each check. If the level
  > increases, a tool may record a suspicion that the relevant checkpoint
  > is not met. If the author is asked explicitly, it may record that the
  > author claims the checkpoint is met. For a better implementation, only
  > do this after providing some repair attempt.
[snip]
  I think the real technique is annotating or having the tool record that the
  author checked something and it should be remembered - using EARL as a
  language to record it.  I'd call it "Annotating with EARL".

  In the example I would like to suggest checks that are more priority 1,
  and not controversial [more usability and less technical accessibility]
  ones like reading level.  For example, using alt="" - the null string -
  for spacer and redundant images should be recorded as "valid" by the
  author.  Another example would be distinguishing tables used for layout
  from real data tables.  The layout tables could be recorded as "layout"
  and no longer checked for TH's, captions, or headers= attributes.  The
  tool could also record that the "layout table" was "linearized" and that
  the author verified it "looked" OK.  In other words, all the P1's that
  require, or could benefit from, some human judgement annotations.

  Regards,
  Phill Jenkins,  (512) 838-4517
  IBM Research Division - Accessibility Center
  11501 Burnet Rd,  Austin TX  78758    http://www.ibm.com/able


-- 
Charles McCathieNevile    http://www.w3.org/People/Charles  phone: +61 409 134 136
W3C Web Accessibility Initiative     http://www.w3.org/WAI    fax: +1 617 258 5999
Location: 21 Mitchell street FOOTSCRAY Vic 3011, Australia
(or W3C INRIA, Route des Lucioles, BP 93, 06902 Sophia Antipolis Cedex, France)

Received on Tuesday, 23 October 2001 07:44:47 UTC