
Re: What does 'look for semantics' mean....

From: Al Gilman <asgilman@iamdigex.net>
Date: Sat, 28 Apr 2001 16:26:43 -0400
Message-Id: <200104282022.QAA12296465@smtp2.mail.iamworld.net>
To: w3c-wai-ig@w3.org
At 02:22 PM 2001-04-28 +0100, David Woolley wrote:
>
>> and also if this ability to 'look for semantics' is
>> used in any current 'User Agents'.
>
>IE4+ and NS6 use them whenever the user supplies a style sheet.
>

AG:: Yes, stylesheet access is the primary current case where browsers actually follow a reference to "more semantics" -- in this case presentation semantics -- from an HTML document.

And this uses the LINK element, not the DOCTYPE indication.

The other mechanism in HTML for linking to more semantics is the PROFILE attribute, but it is not used much.
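For concreteness, a sketch of the two linking mechanisms just mentioned; the profile URI and stylesheet filename here are invented placeholders, not real resources:

```html
<html>
  <!-- The profile attribute on HEAD points at a URI identifying a
       metadata profile (placeholder URI; rarely used in practice). -->
  <head profile="http://example.com/profiles/core">
    <title>Example</title>
    <!-- LINK is the mechanism browsers actually follow today:
         here it references presentation semantics in a style sheet. -->
    <link rel="stylesheet" type="text/css" href="site.css">
  </head>
  <body>...</body>
</html>
```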

The above statements hold for all versions of HTML up through XHTML 1.1, so the use of document type indications as a vehicle for linking to "more semantics" is largely hypothetical for HTML at the present time.  But not forever.

The leading case of a technology that will involve [possibly optional] client-side schema-aware processing in the near future is XForms.  There, data-validity checking can, and I suspect sometimes will, be done in the client by schema-aware processing based on the schema used to define 'legal' results for filling out the form.  This saves the web content author from writing and shipping data-validation scripts for each form that deals with a given schema.  They have the option of publishing the schema and letting the client do data validation by interpreting the schema.  Some of us in the WAI are enthusiastic about this prospect because the schema provides a better basis for use of alternative fill-out modes than does a screen-specific form design.
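The idea can be sketched in miniature: one generic, schema-interpreting validator replaces a per-form validation script.  This is an illustration only, using an invented dictionary-based schema format; a real XForms client would interpret the XML Schema published with the form.

```python
# Toy stand-in for a schema-aware client: the schema format here is
# invented for illustration, not any real XForms or XML Schema syntax.

def validate(schema, form_data):
    """Check form_data against a declarative schema, returning a
    list of error messages (empty when the data is 'legal')."""
    errors = []
    for field, rules in schema.items():
        value = form_data.get(field)
        if value is None or value == "":
            if rules.get("required"):
                errors.append(f"{field}: required")
            continue
        if rules.get("type") == "integer":
            try:
                value = int(value)
            except ValueError:
                errors.append(f"{field}: must be an integer")
                continue
            if "min" in rules and value < rules["min"]:
                errors.append(f"{field}: below minimum {rules['min']}")
        if "maxlength" in rules and len(str(value)) > rules["maxlength"]:
            errors.append(f"{field}: too long")
    return errors

# One generic interpreter covers every form that publishes a schema;
# no per-form validation script has to be shipped with the content.
age_schema = {"name": {"required": True, "maxlength": 40},
              "age": {"type": "integer", "min": 0, "required": True}}

print(validate(age_schema, {"name": "Ada", "age": "36"}))    # []
print(validate(age_schema, {"name": "", "age": "minus"}))
```

The same interpreter could equally drive a voice or braille fill-out mode, which is exactly why a published schema is a better basis for alternative modes than a screen-specific form design.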

Another instance where [I think] User Agents may well selectively access and apply 'more semantics' in the near future has to do with pronunciation.  The Voice Browsing community is actively pursuing a public convention for defining and sharing pronunciation-enabled glossaries.  While voice portals may be the economic force behind the publication of such glossaries, they are useful in text-to-speech at the client side as well, and I hope we may see screen readers acting on this information when available.
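A minimal sketch of what a client acting on such a glossary might do; the glossary format is invented here, since the public convention is still being worked out:

```python
# Illustrative only: a shared glossary maps written forms to
# pronounceable spellings, and a text-to-speech front end (or a
# screen reader) substitutes the pronunciation when one is available.
# The entries below are made-up examples.

GLOSSARY = {
    "WAI": "way",        # spoken as a word, not spelled out
    "SQL": "sequel",
    "cache": "cash",
}

def prepare_for_tts(text, glossary=GLOSSARY):
    """Replace glossary terms with their pronounceable spellings,
    leaving unlisted words untouched."""
    return " ".join(glossary.get(word, word) for word in text.split())

print(prepare_for_tts("the WAI cache"))    # the way cash
```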

There is a lot of research and development work (that a lot of people are interested in) on schema-aware processing in data mediation middleware.  Middleware is not client software, but it can be brought to bear at the request of either the user or the publisher, and so is not necessarily excluded from what we mean when we say "the user agent can follow this to get at more semantics."

The basic use case for "more semantics" is whatever the User Agent does in response to a "Hunh???" query from the user.  This is what will supersede the 'Help' menu item.  The existence proof for such services is the GuruNet Alt-Click accessed vocabulary assistant available from atomica.com.

This "Hunh???" user action is closely related to the "try harder" escalation
request that is a central feature of the services spectrum that Trace is
pushing with the developers of advanced internet technology.

Modality Translation on Next Generation Internet
http://trace.wisc.edu/handouts/modality_translation/

If a page explicitly refers to a [pronouncing] glossary, it is unlikely that a
service such as GuruNet would be able to ignore the reference, although the
first-pass presentation processing of a page in a screen environment would
almost certainly fail to follow this link.

Al  
Received on Saturday, 28 April 2001 16:22:23 GMT
