- From: Masafumi NAKANE <max@wide.ad.jp>
- Date: Mon, 13 Nov 2000 18:24:15 +0900
- To: Ian Jacobs <ij@w3.org>
- Cc: w3c-wai-ua@w3.org, w3c-i18n-ig@w3.org
Hi,
I mentioned my concerns about UAAG to Martin a few days ago, and
they are basically covered in his comments. Here is a follow-up to
Ian's note.
On Sat, 11 Nov 2000 22:37:53 -0500, Ian Jacobs wrote:
> =============================================
> Charset information
> =============================================
> IJ: If it affects everyone equally, then we have a convention
> not to include such requirements in such a document.
[snip]
> MD: Each API probably has its own specific character encoding.
> For instance, in Japan, there are three widely-used encodings.
> When the UA passes information through an API, it must ensure
> that the information is converted per the encoding rules of that
> API. [The DOM is a special case of this.]
[snip]
> Actions:
> - Talk to Masafumi about what happens when a UA
> doesn't get the proper character encoding information.
Whether the UA gets the encoding info depends on how the document is
provided, so that part is an issue for the server or for the meta
information in the document.
The important thing here is that the UA should make sure the
document is converted into the encoding used by the APIs that
receive it from the UA, or, at the very least, that the charset info
is included when the document is passed to the API so that it can be
converted to the appropriate charset there.
Without that, each rendering component has to guess the charset and
convert the document on its own, and auto-detection does not always
succeed. When there are both visual and speech output, for example,
it is possible for the visual rendering to come out correctly while
the speech output is a complete mess, if the conversion is performed
by each rendering engine separately. (I once had a UA display
complete garbage on the screen while the screen reader read it just
like normal text, and I didn't realize there was any problem with
the document until I had a chance to work with someone sighted.)
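That failure mode is easy to reproduce (a Python sketch; the
encodings are chosen just for illustration):

    data = "日本語".encode("euc-jp")    # unlabeled bytes, as received
    print(data.decode("euc-jp"))        # one engine guesses right: 日本語
    print(data.decode("shift_jis", errors="replace"))  # another guesses wrong: garbage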
> MD: "case" (e.g., uppercase) is a special case. If you translate
> to braille, you lose the distinction between katakana and
> hiragana.
>
> Action MD:
> - Verify this with Max Nakane.
Yes, in general. It is not true when the user uses a braille
representation that is not in common use. (There are a few Japanese
braille systems designed for more accurate braille translation, but
they are rarely used.)
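To illustrate the lossiness (a toy Python sketch; the table shows a
single syllable, and real braille translation is far more involved):

    # In standard Japanese braille, hiragana あ and katakana ア use
    # the same cell (dot 1), so the distinction disappears on output.
    TO_BRAILLE = {"あ": "⠁", "ア": "⠁"}
    print(TO_BRAILLE["あ"] == TO_BRAILLE["ア"])   # True: the info is lost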
> =============================================
> 7.5 Search
> =============================================
[snip]
> IJ: Our limits are:
> - Search for text characters in source
> - Search only on those that have been rendered (or
> will be, through a viewport)
What if a user is reading Japanese text through braille output? On
the braille display, all kanji characters are converted into kana.
Realistically, I don't think making the UA support a different
search mechanism is the right approach. Instead, screen readers
should provide functionality that lets users identify each character
as needed (similar to the phonetic announcement of letters in
English screen readers). I'm not too sure whether this is within the
scope of UAAG, but since the definition of a UA includes AT, it may
be.
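What I have in mind is something along the lines of the detailed
reading that Japanese screen readers do today (a rough Python
sketch; the descriptions here are invented for illustration):

    # Announce a kanji by a well-known word that contains it, the way
    # English screen readers say "a as in alpha".
    DETAIL = {
        "漢": "kan as in KANJI (漢字)",
        "字": "ji as in MOJI (文字)",
    }

    def identify(ch):
        # Fall back to the character itself when no description is known.
        return DETAIL.get(ch, ch)

    for ch in "漢字":
        print(identify(ch))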
Cheers,
Max
---
Masafumi NAKANE, World Wide Web Consortium (W3C)
[E-Mail]: max@w3.org / max@wide.ad.jp