
Re: SVG Font criticism

From: Raph Levien <raph@acm.org>
Date: Sun, 15 Aug 1999 18:01:30 -0700
Message-ID: <37B762E9.FD339240@acm.org>
To: Chris Lilley <chris@w3.org>
CC: Raph Levien <raph@evilplan.org>, www-svg@w3.org

Thanks, Chris, for the prompt and substantive response to my comments.
It's refreshingly uncharacteristic :)

Chris Lilley wrote:
> 
> If you have a technical proposal on how hinting can work in a
> resolution-independent way for glyphs which can be arbitrarily rotated
> and skewed and thus will not line up with an axis-aligned pixel grid, I
> would be very interested to see it. Existing hinting mechanisms tend to
> assume particular pixel sizes for the rendering, and do not work with
> rotated characters. Which is not to say that it's impossible, of course,
> just that we don't have a technical proposal before us to consider.

It seems like most of the computational geometry needed can be found in
John Hobby's PhD work on Metafont. In my personal opinion, hinting is
far more important for fonts rendered in a rectilinear grid than in
skewed or rotated coordinate systems, both because it is more common and
also because the eye is most sensitive to bad hinting on horizontal and
vertical strokes. Nonetheless, if constant-width lines on rotated
characters are a requirement, Hobby's work is a good place to start. The
most complete and accessible presentation is in [2].

> I think you should spend a little time looking at the CSS2 spec before
> accusing it of being "sheer bullshit".

I have read the CSS2 spec on fonts fairly closely, and I stand by my
strong language. I will expand a bit on my thoughts, however.

> For example, I see nothing particularly wrong with
> 
> font-family: "Some Obscure Font", "Some widespread font", "My SVG font",
> sans-serif

Well, I do :)

I'm assuming here that we're in the space of "conformant" files and
viewers, i.e. the expectation that the file will render substantially
identically on all viewers. For the purposes of this discussion, I am
considering hinting optional, though.

Thus, conformant files must _always_ include an SVG font. Further, in a
conformant file, any reference to a non-SVG font must have identical
metrics, identical unicode to glyph mapping, and substantially identical
glyph shapes.

This is a reasonable set of criteria, in my opinion, but it _does_
require that the viewer's interpretation of font formats be
well-defined. Particular trouble spots include the fact that PostScript
Type1 fonts do not contain unambiguous character to glyph mappings, the
fact that the CSS2 WebFont mechanism contains no well-defined way to
load both .pfb (glyph shapes) and .afm (font metrics) files for a Type1
font, and, in the TrueType/OpenType universe, the fact that these file
formats contain many optional features not supported by SVG fonts.
Conformance would require either that viewers explicitly disable these
optional features, or that font files themselves are limited to a lowest
common denominator of features with SVG fonts.

> "My SVG font" would be in SVG and the others would be in whatever
> alternative font technology, such as TrueType, Type 1, whatever, the
> implementation was able to process. The font-family declaration doesn't
> mandate the format to be used.
> 
> > More honest language for this draft would be:
> >
> >    SVG fonts contain unhinted font outlines. Because of this, on many
> >    implementations there will be limitations regarding the quality and
> >    legibility of text in small font sizes. There is no way to create
> >    conformant SVG files using higher quality hinted fonts. For
> >    increased quality and legibility in small font sizes, content
> >    creators may want to use an alternate format other than SVG.
> 
> This is factually inaccurate. It is possible for a renderer to use high
> quality hinted fonts, if it has any available.

Again, I stand by the language, emphasizing the word "conformant."

Even given that the Working Group is serious about a best-practice
interoperable spec, these gaps indicate an unwillingness to tackle the
difficult issues that providing _both_ quality and interoperability
raises. In my opinion, the only serious technical advantage to allowing
optional font technologies is the ability to add hinting. One way to do
this without adding any interoperability concerns is to add hinting to
the SVG font format itself, as I already proposed.

> >    The Adobe charstring format dates back to 1985, and has acquired a
> > large body of knowledge and font design tools. Over its evolution, it
> > has acquired the features needed for rendering a wide range of
> > scripts, including high quality CJK font rendering. In addition, there
> > are no known intellectual property constraints, and many excellent
> > free tools exist to parse, generate, and manipulate charstrings.
> >
> >    It is most ironic that a free software developer is pushing Adobe
> > technology on a specification,
> 
> Yes, it is somewhat ironic. It's still potentially a good solution, if
> the IPR issues can be declared non-existent by someone in a position to
> authoritatively so declare them and if the idea of having two,
> completely different ways to specify a Bezier curve, one in XML and one
> not in XML, can be justified for an XML specification, and if
> implementors are positive in response to the proposal.

By all means get a positive response on the IP. Incidentally, while I'm
on the topic, I'm in the process of obtaining a patent on the use of
cubic beziers in interactive Web graphics. I'm sure SVG implementors
will be interested in contacting me about licensing.[3]

Also, I got a chuckle out of the notion that the d attribute of the
<path> element "is" XML, while a hex-encoded charstring "is not" XML.
Unless you have some kind of magic XML pixie dust, I would consider them
both highly non-XML syntaxes for concisely encoding cubic bezier paths.
You could even define a DOM API for charstring data completely
compatible with the one for path data, with suitable extensions for
hinting, if desired.

> > but that is exactly what I'm doing
> > here. The <glyph> element should be allowed to include hex or base64
> > encoded type2 charstring data (the difference is basically a 50%
> > difference in uncompressed file size, either way probably much more
> > compact than svg path syntax).
> 
> Thanks. It's good to see an actual concrete proposal here.
> 
> So, you are suggesting that all conformant implementations should be
> required to parse and display hinted Type2 glyphs. What sort of hit
> would that be on code size and implementation complexity, would you
> estimate? For example, does Java2 provide any help here? Is there code
> to do this as part of common OS-s that could be called to help with
> this, or would it require implementation from scratch? Lastly, what
> would the rendering look like?

Well, I think the interpretation of the hints could safely be left
optional. More importantly, content creators would be able to _create_
conformant SVG files that could be rendered with hinting, at least by
some implementations.

A charstring to bezier path translator (without hinting) is roughly 500
lines of C code. The relevant section from Gill is in Gnome LXR at:

http://cvs.labs.redhat.com/lxr/source/gill/gt1-parset1.c#2248

I would imagine that the Java hackers could get it in substantially less
code. Here's a Perl implementation I dug up in a quick web search:

http://tug.org/ListsArchives/pdftex/msg00921.html
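To make the "roughly 500 lines" claim concrete, here is a minimal sketch
(in Python, purely for illustration) of the non-hinted translation: it
decodes the Type1 number encoding and a handful of path operators
(rmoveto, rlineto, rrcurveto, closepath), and simply drops hint
operators. Subroutine calls, seac, flex, and the other operators a real
implementation needs are omitted.

```python
def decode_charstring(data):
    """Translate a (decrypted, non-hinted) Type1 charstring byte
    sequence into a list of absolute path commands. Hint operators
    and subroutine calls are ignored in this sketch."""
    stack, path, cur = [], [], (0, 0)
    i = 0
    while i < len(data):
        v = data[i]; i += 1
        if v >= 32:                       # number encodings
            if v <= 246:
                stack.append(v - 139)
            elif v <= 250:
                stack.append((v - 247) * 256 + data[i] + 108); i += 1
            elif v <= 254:
                stack.append(-(v - 251) * 256 - data[i] - 108); i += 1
            else:                         # 255: 32-bit big-endian integer
                stack.append(int.from_bytes(data[i:i+4], 'big', signed=True))
                i += 4
        elif v == 21:                     # rmoveto: dx dy
            cur = (cur[0] + stack[0], cur[1] + stack[1])
            path.append(('moveto', cur)); stack = []
        elif v == 5:                      # rlineto: dx dy
            cur = (cur[0] + stack[0], cur[1] + stack[1])
            path.append(('lineto', cur)); stack = []
        elif v == 8:                      # rrcurveto: six relative deltas
            x1 = cur[0] + stack[0]; y1 = cur[1] + stack[1]
            x2 = x1 + stack[2];     y2 = y1 + stack[3]
            cur = (x2 + stack[4], y2 + stack[5])
            path.append(('curveto', (x1, y1), (x2, y2), cur)); stack = []
        elif v == 9:                      # closepath
            path.append(('closepath',)); stack = []
        else:                             # hstem, vstem, etc.: drop operands
            stack = []
    return path
```

The core really is just a byte-dispatch loop plus relative-to-absolute
coordinate bookkeeping, which is why the full C version stays so small.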

OS support obviously depends on the platform. If I understand correctly,
recent versions of Adobe Type Manager do contain APIs for obtaining
bezier path data from type1 fonts. I'm not sure if it's possible to call
these with raw charstrings, or whether the APIs require entire font
files.

At least three other high quality free implementations exist: in
GhostScript (available under GPL license), in the Type1 font renderer in
XFree86 (under a highly unrestrictive X license), and in t1lib, which is
an adaptation of the X one (LGPL, I think). There are others available
if you do some digging (including one in Python, in the Gnome Sketch
program).

I don't understand the question about what the rendering would look
like.

> I think the major use of SVG fonts will be for larger point sizes, and
> having seen some text converted to curves and displayed, with correct
> antialiasing and correct gamma control, I think the results can look
> very good. I have seen far worse font rendering in commercial systems,
> such as the poor Type 1 rendering in most X implementations. It's also
> possible to list an SVG font as a fallback to only be used if a
> particular renderer is unable to locate the higher-preference font
> families, which would be in whatever font format the particular platform
> supported.

I see no reason to believe that large fonts will be the norm. For
example, in GIS applications, extremely high quality rendering of fonts
in the 10-pixel range is an absolute requirement.

Hinting and antialiasing are two more-or-less orthogonal issues. The
TypeSolutions demos may help clarify this:

http://www.typesolutions.com/

Hinting becomes less important with antialiasing turned on. This is
because antialiasing guarantees correct rendering of curves and diagonal
segments. However, rendering of straight horizontal and vertical
segments is quite a bit crisper (the edge is more likely to align with
pixel boundaries) with hinting turned on.
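A toy coverage computation illustrates the point. Assuming one-unit
pixels and box filtering (my simplification, not any particular
renderer), an antialiased stem whose edges land on pixel boundaries
comes out crisp, while the same stem shifted half a pixel smears into
two gray columns; hinting's job is to nudge stems onto those boundaries.

```python
def stem_coverage(x0, x1, npix=4):
    """Fraction of each one-unit pixel covered by a vertical stem
    spanning [x0, x1), computed by simple interval overlap."""
    return [max(0.0, min(x1, p + 1) - max(x0, p)) for p in range(npix)]

# Grid-fit stem: one solid pixel, crisp edges.
print(stem_coverage(1.0, 2.0))   # [0.0, 1.0, 0.0, 0.0]
# Same stem shifted half a pixel: two 50%-gray columns.
print(stem_coverage(0.5, 1.5))   # [0.5, 0.5, 0.0, 0.0]
```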

I agree that in the 18px-and-up range, antialiasing, with or without
hinting, is far superior to non-antialiased rendering.

Hinting is also important for printing devices in the 300 dpi range, and
makes a subtle but noticeable quality improvement in the 600dpi range.

> >    My other major criticisms have to do with i18n. Specifying glyphs
> > in terms of unicode sequences and using longest-match semantics for
> > choosing ligatures makes sense, but is obviously inadequate for
> > complex scripts.
> 
> Yes, in the limit, any technology is inadequate for complex scripts.
> There is always one other script, like Mayan or Rongo-Rongo, that needs
> special rules.
> 
> On the other hand, the 80-20 rule can give substantial benefit for
> moderate cost. Take a look at the OpenType spec, then take a look at the
> Arial Unicode font and the Tahoma font and the Lucida Sans Unicode font
> and see which OpenType features they have. Arabic and Han.
> 
> The SVG font spec deals with what those industry fonts can do -
> ligatures, arabic contextual forms, unihan disambiguation, bidi,
> vertical text. That's a fair chunk of functionality. Yes, support for
> Indic scripts is not there. But it covers a good bunch of needs and
> aligns well with practical, real-world industry attempts to hit the
> middle ground between "just English" and "everything possible".
> 
> > This is not to say that complex scripts are
> > impossible, just that they will generally be rendered a character at a
> > time using the altglyph property, explicit x,y positioning, etc. This
> > criticism also applies to placement of diacritical marks.
> 
> Yes, it's a good criticism and there is clearly room for incremental
> improvement here in future versions. We should say more about diacritics,
> although the impact of the W3C character normalisation model helps us
> out here a lot.
> 
> >    The major consequence of this is that in many scripts, SVG text
> > will basically be uneditable, as well as hugely expanded in file size.
> 
> In many *scripts*, yes. All the ones that current browsers don't even
> attempt to display. But remember the 80:20 rule and apply it to
> populations of Web users. For the majority of text on the Web today,
> existing either as text or rendered into little GIFs, SVG will deal with
> it.
> 
> Text in all western and eastern European languages, in Greek, in Hebrew,
> in Arabic, in Japanese and Chinese and Korean, in cyrillic scripts, in a
> bunch of other scripts such as Native American scripts (Cree, Navaho,
> etc) all of that will be editable and not expanded in file size.
> 
> That's a significant market.

Good point.

> That's a significant capability for making graphically rich, well
> internationalised illustrations, and it takes the industry enough on
> from current capabilities while not trying to take it too far at once.
> 
> The alternative to doing this is that text in any language other than
> English requires converting the text to curves, thus losing all
> editability and searchability. The SVG font capability allows the
> outlines to be stored, but also allows the text to be stored and to
> remain editable.

Well, there are other alternatives, but I basically agree with what
you're saying here. I'd like to see language acknowledging these
restrictions, and perhaps providing recommendations on the best way to
deal with the tricky scripts. If present practice is any guide, a lot of
SVG creators will be tempted to do non-latin languages encoded in the
latin1 range. Creators should be encouraged to sacrifice editability
when necessary, but not searchability (i.e. correct encoding of the
source text in Unicode).

> >    I think "isolated" is more usual terminology than "standard" as a
> > value for the "arabic" attribute.
> 
> Yes. Isolated is the term I have generally heard used most often and is
> the term used in Daniels and Bright[1] which is as close to definitive
> as a general work can be. OpenType uses the term standard.
> 
> > and Calling this attribute "arabic"
> > may not be wise - Syriac (in the Unicode pipeline) has exactly the
> > same structure of contextual forms.
> 
> Yes. Manchu has the same structure and Mongolian has similar structure.
> Again, Arabic is what OpenType calls this feature. Suggestions for a
> better name which is more inclusive, while still being readily
> understandable, are welcome.

Ok, if these names are consistent with OpenType, I have no objection to
them.

> >    I do not understand the value in tagging the locale for glyphs in
> > the han range. To mix different CJK languages, it makes the most sense
> > (to me) to simply use different fonts, subsetted as necessary.
> 
> "Use different fonts" is one approach. Equally, most anything can be
> done with using separate fonts. But recall that the majority of the
> glyphs are actually the same, and can be shared; one only needs to
> indicate the ones which are different.

Uhm, but how is the renderer directed to select the appropriate
locale-specific glyphs as needed? I'll also point out that the various
options for the "han" attribute are _not_ consistent with the ISO
language tagging adopted by XML.
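To make the question concrete, here is the kind of lookup I would
expect, sketched in Python; the glyph-table layout is entirely my
invention, not what the draft specifies. The shared outline is the
default, and only locale-divergent glyphs carry a tag.

```python
def pick_glyph(glyphs, char, lang):
    """Prefer a glyph variant tagged for `lang` (e.g. 'ja', 'zh-cn'),
    falling back to the shared default stored under key None.
    Hypothetical table layout, for discussion only."""
    variants = glyphs.get(char, {})
    return variants.get(lang, variants.get(None))

# One character, one shared outline, one PRC-specific variant.
glyphs = {"U+9AA8": {None: "shared-outline", "zh-cn": "prc-outline"}}
```

The open question is where `lang` comes from: xml:lang on the text, or
something else? That is what the spec would need to pin down.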

> > Similar issues exist for the Arabic and Cyrillic locale variations,
> > but only the "han" attribute exists.
> >
> >    The exact interpretations of some tags are quite badly
> > underspecified. One can easily imagine that the "arabic" contextual
> > forms are to be interpreted according to the Unicode rules, but
> > nowhere is this explicitly stated.
> 
> OK, thanks. Yes that was the intention and I couldn't imagine anyone
> drawing any other conclusions but I agree that more explicit language
> would be helpful.
> 
> > Do other sets of Unicode rules also
> > apply, for reordering in complex scripts, for example? For composition
> > of Korean Hangul? This needs to be very explicitly stated.
> 
> Composition is what happens to characters. I would expect a Korean font
> to supply glyphs for each precomposed hangul used. Of course, the
> use/symbol feature can be used to good effect with composite glyphs such
> as this (and also to break Han glyphs into radical strokes which are
> re-used, for example).

Ok, it's clear to me that the ligation rules subsume the Hangul
composition rules, if appropriately applied.

Incidentally, I think an ambiguity still remains in the ligature rules.
Given the ligatures "12" and "2345", should the string "12345" be
rendered: 12 3 4 5, or 1 2345 (i.e. greedy vs. lazy longest matching)?
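The greedy reading can be pinned down in a few lines (a sketch, not
anything the draft specifies): at each position, left to right, take the
longest matching ligature. That yields "12 3 4 5" for the example, while
lazy or globally optimal matching could yield "1 2345" - the spec should
say which is intended.

```python
def greedy_ligate(text, ligatures):
    """Left-to-right greedy matching: at each position, substitute the
    longest ligature that matches there, else emit a single character."""
    out, i = [], 0
    while i < len(text):
        best = text[i]
        for lig in ligatures:
            if text.startswith(lig, i) and len(lig) > len(best):
                best = lig
        out.append(best)
        i += len(best)
    return out

print(greedy_ligate("12345", ["12", "2345"]))  # ['12', '3', '4', '5']
```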

> Thanks for your technical feedback, which was appreciated, although most
> of the issues you raise had in fact already been discussed internally,
> it's good to see you bringing the same issues up and coming to similar
> conclusions.

Well, that's certainly a nice side effect of the closed process - having
to hash out all the controversial stuff twice.

I had some more thoughts since yesterday. The ability to include
arbitrary svg (including images) in glyphs leads to some neat
possibilities, but the requirement to support DOM-based updating adds a
significant burden to the implementation. In Gill, for example, the
<text> element is implemented as a single text object, for which caching and
other optimizations can be readily deployed. With the new spec, I
basically have to treat it as a lot of <use> instances, which of course
affects performance. To auto-negotiate between the common case (plain
cubic bezier paths) and the general case adds even more complexity.
Perhaps it would be better to disallow DOM updating of fonts and glyphs?

Criticisms I have in the queue:

1. I believe that the animation facilities are redundant with DOM-based
animation. The SVG working group may have the dubious honor of
introducing the <blink> of the '00s to the world. All of us involved in
Gnome graphics development agree on this.

2. Merely specifying ICC profiles is in fact inadequate for high-quality
printing. More details of this criticism are available in the essay
"What's wrong with the ICC profile format anyway?", by Graeme Gill:

http://web.access.net.au/argyll/icc_problems.html

I believe that a reasonable fix to the problems is to allow
specification of a gamut mapping algorithm in addition to simply
colorimetric data. Jan Morovic's PhD thesis contains a thorough study of
various gamut mapping algorithms:

http://ziggy.derby.ac.uk/~jan/gamut_mapping.html

If people don't understand what I'm talking about, please ask (and feel
free to use this mailing list rather than putting the question in the
next public draft).

Given that this issue is the subject of ongoing standardization work by
the CIE, the SVG working group may feel that it is not appropriate to
duplicate that work. However, I think it is at least worth tracking:

http://www.colour.org/tc8-03/

3. I hesitate on this criticism, but would like to bring the issue up.
If implementing a CSS2 system from scratch for use with SVG, I see quite
a number of aspects of CSS2 that are irrelevant or useless. For example,
I believe that the pseudo-classes could be safely omitted without loss
of functionality or convenience. In addition, the "+" linkage relating
siblings seems to me of very limited use in SVG, and its removal would
eliminate the need to track cross-sibling dependencies - in particular,
siblings could be reordered at will without changing the styling.

However, the reason that I hesitate is that adding these restrictions
would trade _implementation_ complexity for a small amount of
_specification_ complexity, in particular the need to carefully specify
the exact nature of this subsetting.

Thus, I raise the issue: a rather substantial savings in implementation
complexity is possible, if desired. Yet, I will not stand in the way of
a decision either way.

> [1] Daniels, Peter T; Bright, William "The World's Writing Systems",
> Oxford University Press, 1996. Hardback, 922pp, illustrated, includes
> index. ISBN 0-19-507993-0

I'll need to get this!

[2]
@article{Hobby89,
   author = {J. D. Hobby},
   title = {Rasterizing Curves of Constant Width},
   journal = {J. Assoc. Comp. Mach.},
   volume = {36},
   number = {2},
   pages = {209--229},
   year = {1989}
}

[3] This is a joke. However, the humor content is comparable to the idea
that Type1 charstrings actually have enforceable intellectual property
restrictions.
Received on Sunday, 15 August 1999 21:00:52 GMT
