- From: Shyjan Mahamud <mahamud@marr.ius.cs.cmu.edu>
- Date: Sun, 30 Jul 2000 01:33:44 -0400
- To: www-font@w3.org
Hello, I am trying to better understand the role of the unicode-range descriptor in the @font-face rule. As far as I can tell, this descriptor is primarily intended to avoid downloading fonts that don't contain any glyphs of interest. However, there are applications where you would want to compose a "virtual font" from several real fonts. The "Excelsior" example shown at http://www.w3.org/TR/REC-CSS2/fonts.html is close to what I have in mind, but I need some clarification.

Suppose you are defining a new virtual font "vfont" from two real fonts, "rfont1" and "rfont2". Assume we want the Unicode points U+00-FF from "rfont1" and U+100-200 from "rfont2". Adapting the "Excelsior" example to this case, we would have something like:

    @font-face {
      font-family: vfont;
      src: local(rfont1), url(...);
      unicode-range: U+??;      /* Latin-1 */
    }
    @font-face {
      font-family: vfont;
      src: local(rfont2), url(...);
      unicode-range: U+100-200; /* Latin Extended A and B */
    }

All this is clear. The question is what happens when the real fonts rfont1 and rfont2 have overlapping Unicode points; say, for simplicity, they both cover U+000-200. I would like characters in U+100-200 to be picked up only from rfont2 and not from rfont1, even though a user agent could optimize by using rfont1 alone for the whole range. I want to know whether this semantics is implied by the unicode-range descriptor. It is not entirely clear from the CSS2 spec that this is the case, since the discussion there seems to be devoted mostly to avoiding unnecessary font downloads. In other words, I am less interested in specifying the actual range that a font contains than in specifying a subset of that actual range.
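For concreteness, here is a minimal sketch of the overlapping case under the reading I am hoping for, i.e. unicode-range as a binding subset rather than a mere download hint. The URLs are hypothetical placeholders:

    /* Assume both real fonts actually cover U+000-200, but each rule
       names only the subset we want taken from that font. Under the
       "subset" reading, U+100-200 must be rendered from rfont2 even
       though rfont1 also contains those code points. */
    @font-face {
      font-family: vfont;
      src: local(rfont1), url(http://example.com/rfont1.pfr); /* hypothetical URL */
      unicode-range: U+00-FF;   /* take only Latin-1 from rfont1 */
    }
    @font-face {
      font-family: vfont;
      src: local(rfont2), url(http://example.com/rfont2.pfr); /* hypothetical URL */
      unicode-range: U+100-200; /* take only these points from rfont2 */
    }
    P { font-family: vfont, serif; }

Under the other reading, where unicode-range merely describes what a font contains so the user agent can skip unnecessary downloads, a user agent that knows rfont1 really covers U+000-200 might still draw everything from rfont1. That is exactly the ambiguity I would like resolved.

- shyjan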
Received on Sunday, 30 July 2000 01:34:13 UTC