- From: Khaled Hosny <khaledhosny@eglug.org>
- Date: Thu, 4 Aug 2016 00:05:26 +0200
- To: "Amir E. Aharoni" <amir.aharoni@mail.huji.ac.il>
- Cc: Richard Ishida <ishida@w3.org>, "public-i18n-core@w3.org" <public-i18n-core@w3.org>, "public-i18n-bidi@w3.org" <public-i18n-bidi@w3.org>
On Wed, Aug 03, 2016 at 03:16:16PM +0300, Amir E. Aharoni wrote:
> Automatic detection in mobile chat apps and social networks like
> YouTube, Twitter and Facebook is not perfect, but usually it works
> surprisingly well. But every app implements it separately. In general it
> seems that it mostly works by counting characters or words. Making one of
> these algorithms standard would be far better than standardizing
> first-strong. It's unfortunate that first-strong was picked for HTML's
> dir="auto", too.

The method used by Twitter (the percentage of LTR or RTL characters) is far
from perfect either. It is very unpredictable unless one can count the
characters in one's head, and it is inflexible: what if a paragraph that has
more LTR characters than RTL is actually an RTL paragraph? How can one
override the automatic detection? Not to mention that it does not handle the
case of all-neutral paragraphs (which is not uncommon).

I find the first-strong algorithm to be both predictable and flexible, though
using control characters is a PITA in most applications.
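
To make the comparison concrete, here is a minimal sketch of the two
heuristics in Python. The function names and the 30% threshold are
illustrative assumptions only; this is not Twitter's actual classifier nor
any real HTML dir="auto" implementation.

    # Rough sketch of the two heuristics discussed above; not real
    # production code from Twitter or any HTML implementation.
    import unicodedata

    RTL_CLASSES = {"R", "AL"}   # strongly right-to-left (Hebrew, Arabic, ...)
    LTR_CLASSES = {"L"}         # strongly left-to-right

    def first_strong_direction(text, default="ltr"):
        """Direction of the first strongly typed character."""
        for ch in text:
            cls = unicodedata.bidirectional(ch)
            if cls in RTL_CLASSES:
                return "rtl"
            if cls in LTR_CLASSES:
                return "ltr"
        return default  # all-neutral text falls back to a default

    def counting_direction(text, rtl_threshold=0.3, default="ltr"):
        """Direction guessed from the share of RTL among strong characters."""
        ltr = rtl = 0
        for ch in text:
            cls = unicodedata.bidirectional(ch)
            if cls in RTL_CLASSES:
                rtl += 1
            elif cls in LTR_CLASSES:
                ltr += 1
        strong = ltr + rtl
        if strong == 0:
            return default  # neither heuristic helps with all-neutral text
        return "rtl" if rtl / strong >= rtl_threshold else "ltr"

With the first sketch, prepending a single RLM (U+200F) is enough to override
the guess for a mostly-LTR paragraph that is really RTL; with the second, the
result depends on the threshold and on how many characters of each kind the
text happens to contain, which is the unpredictability described above.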
Received on Wednesday, 3 August 2016 22:36:29 UTC