- From: Alain LaBonté <alb@sct.gouv.qc.ca>
- Date: Wed, 5 Feb 1997 13:54:21 -0500
- To: Johan Zeeman <zeeman@fox.nstn.ns.ca>, iso10646@listproc.hcf.jhu.edu, Unicore <unicore@unicode.org>, Unicode <unicode@unicode.org>, www-international <www-international@w3.org>, HTTP WG <http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com>, Search <search@mccmedia.com>, ISO10646 <iso10646@listproc.hcf.jhu.edu>
At 11:55 97-02-05 -0600, Johan Zeeman wrote:
>At 11:31 05/02/97 -0500, Alain LaBonté wrote:
>
>>Anyway the logic, once the source data has been normalized, should be the
>>same after all. I am pretty sure nobody uses UTF-8 or even entity names as
>>their canonical processing encoding... That would be nonsense. But who
>>knows, masochism exists, I know (:
>>

[Johan]:
>Well ... in our bibliographic database, we intend to store UTF-8 in the
>database on the server, and have the client applications transform to 16-bit
>representations for processing. When a non-ASCII character is present maybe
>once in a hundred characters, the saving in storage is significant.
>
>My concern with delivering UTF-16 over http is not so much with the browser
>as with the other applications the document may be passed to. Think of all
>the folks who still use WP5.1 because they are comfortable with it.

Brilliant case in point: WP 5.1 uses 16 bits internally; it never works with
the external character set (;

In fact it is an example to follow, a superior technology as far as character
sets are concerned: no conversion is ever necessary when you change the
external character set, believe it or not!

Alain LaBonté
Québec
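
The trade-off Johan describes can be sketched in a few lines of Python. This
is a minimal illustration only; the sample record, variable names and figures
are invented and do not come from the original messages. It shows UTF-8 as the
compact server-side storage form for mostly-ASCII text, and the one-time
client-side decode into a fixed-width form for processing.

    # Minimal sketch: compare the storage cost of UTF-8 against a 16-bit
    # encoding for mostly-ASCII bibliographic text, then decode once for
    # fixed-width processing. The sample record is hypothetical.
    record = ("LaBonté, Alain. Remarks on character sets. Québec, 1997. "
              "Mostly ASCII, with the occasional accented character.")

    utf8_form  = record.encode("utf-8")      # server-side storage form
    utf16_form = record.encode("utf-16-le")  # 2 bytes per BMP character

    print(len(utf8_form), "bytes as UTF-8")    # ~1 byte per char, 2 for é
    print(len(utf16_form), "bytes as UTF-16")  # 2 bytes per char throughout

    # Client side: decode once, then work in a uniform-width representation.
    text = utf8_form.decode("utf-8")
    print(text[:14])                           # indexing by code point

For text where roughly one character in a hundred is non-ASCII, the UTF-8 form
is close to half the size of the 16-bit form, which is the storage saving
Johan refers to.
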
Received on Wednesday, 5 February 1997 11:02:39 UTC