Re: SVG in OpenType proposal

On 2/3/13 6:29 AM, "Cameron McCormack" <cam@mcc.id.au> wrote:
>For animation, I continue to disagree that a separate animated glyph
>definition is required.

It's required when a font author wishes to provide both a static colored
version AND an animated version of the same glyph (for example, emoji).  In
cases where the authoring and/or rendering environment supports color but
not animation (e.g. MS Word or Pages), the authoring application needs to
be able to choose one over the other.


>Our proposal states that when glyphs are
>rendered in situations where animation is not possible, then the SVG
>animation elements just do not apply.  This is the same behaviour as if
>you took an animated SVG document and opened it in an SVG user agent
>that does not support animation (such as Internet Explorer).  It is
>simple enough to construct your content such that the static view is
>what you would see if the animation elements were not present.

That all assumes you control the SVG renderer, which in most cases you
don't.

Consider an environment (say iOS) where an application has access to a
WebKit-based SVG renderer, but no control over whether it runs SVG
animations.  An application such as Apple's Pages that wants to display
color glyphs but not animated glyphs will not be able to do so under your
proposal, as it won't know which glyphs to load/use.

Now to continue this example out to the OS level… Pages doesn't know
anything about glyphs, nor should it.  Instead, it simply asks iOS to
render a run of text.  iOS will therefore need to provide APIs that let
Pages (etc.) specify whether it should choose SVG-based glyphs at all,
static glyphs only, or allow for animation.  And the only way for iOS to
do that is if it can quickly (i.e. without parsing the SVG) pick the right
glyph from the font.
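To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical names; no such API exists in iOS) of the kind of selection logic a text engine could run if the font declares each glyph's capabilities via table flags, with no SVG parsing required:

```python
from enum import Enum, auto

class GlyphVariant(Enum):
    OUTLINE = auto()        # classic monochrome outline glyph
    STATIC_COLOR = auto()   # static colored SVG glyph
    ANIMATED = auto()       # animated SVG glyph

def select_variant(available, allow_color, allow_animation):
    """Pick the richest glyph variant the caller permits.

    `available` is the set of variants the font declares for a glyph ID,
    read from hypothetical table flags -- no SVG parsing required.
    """
    if allow_animation and GlyphVariant.ANIMATED in available:
        return GlyphVariant.ANIMATED
    if allow_color and GlyphVariant.STATIC_COLOR in available:
        return GlyphVariant.STATIC_COLOR
    return GlyphVariant.OUTLINE

# A Pages-like app that supports color but not animation:
emoji = {GlyphVariant.OUTLINE, GlyphVariant.STATIC_COLOR,
         GlyphVariant.ANIMATED}
chosen = select_variant(emoji, allow_color=True, allow_animation=False)
# chosen is GlyphVariant.STATIC_COLOR
```

The point of the sketch is that the decision is a cheap flag check; it only works if the font format itself distinguishes the static and animated definitions.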



>I will need to read that thread about bounding boxes.

The issue here is that if the SVG is being rendered by a process separate
from the rest of the page content, then (a) it doesn't own the drawing
area, so it can't draw wherever it wants, and (b) it has no information
about what it is drawing on top of, so it can't properly composite as it
animates.

Return to the example above.  How would the WebKit renderer in iOS be able
to animate a glyph on top of Pages' page?  It can't (realistically).
However, there would be no problem with it animating inside the glyph's
BBOX, since that area is "pre-defined" for that glyph.  (Granted, there
are still issues, but they are much more constrained and solvable.)
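The compositing problem can be seen directly in the standard Porter-Duff "over" operator, which any animating renderer must apply each frame.  A minimal sketch (Python; the background values are hypothetical):

```python
def src_over(src, dst):
    """Porter-Duff 'over' for one premultiplied RGBA pixel: composite the
    source (the glyph frame) onto the destination (whatever lies beneath)."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    inv = 1.0 - sa
    return (sr + dr * inv, sg + dg * inv, sb + db * inv, sa + da * inv)

# Without the destination pixels (dst), the glyph renderer cannot evaluate
# this at all.  If animation is confined to the glyph's BBOX, the host can
# capture that small background region once and hand it to the renderer,
# which then re-composites each frame against it.
bbox_background = (0.2, 0.2, 0.2, 1.0)  # hypothetical captured BBOX pixel
frame_pixel = (1.0, 0.0, 0.0, 1.0)      # opaque red from the current frame
composited = src_over(frame_pixel, bbox_background)
```

Every non-opaque source pixel needs `dst`, which is exactly the information a renderer in a separate process lacks for arbitrary page content.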


>>Regarding inheritance, your proposal introduces the "context-XXXX"
>> attribute values but in doing so (and in the examples provided) assumes
>> that the glyphs are being rendered inside of a web-context. No
>> consideration is given for a context that has completely different
>>and/or
>> incompatible attributes. This is why the Adobe proposal specifically
>> leaves these for a future specification during which time the complex
>> issues can be evaluated and (hopefully) resolved.
>
>I think it is reasonable to worry about what the "context text object"
>means for non-Web content and what it would mean for the context fill
>styles from that non-Web context to be used in the glyph.

If we can come up with an appropriate definition and/or way to handle this
case, then I would be OK with having such a concept as part of the SVG
font definition.


>We could add some wording explaining how non-SVG colours and patterns
>from the context would be handled when rendering the glyph.  We would
>need to have a more abstract definition of the paints that can come in
>from the context -- let's say, a solid colour of a type that corresponds
>to one of the kinds of colours you can specify in SVG 2 -- or a fixed
>size pattern/bitmap, which would be handled just like an SVG pattern.
>
>Let me know if there are specific problems that a high level description
>like that would not solve.

I think there are three ways to handle this.

1 - Require the caller to provide SVG-compatible "paints" for the current
graphic state to the font renderer.
2 - Have the font provide a non-context-aware version of the glyph (à la
the animated/non-animated distinction) that can be selected.
3 - Have the font identify context-aware glyphs so that they can be
avoided if not supported (again, à la animation).


2 and 3 are pretty easy to implement (in a variety of ways) and could be
done in a way that is compatible with whatever model we choose for
animated vs. non-animated glyphs (be it the Adobe model or some other
choice).  Personally, this seems like the best approach to me.

1, while it might sound like the right approach, simply won't work
reasonably in most non-web contexts, even if the color model matches SVG.
Go back to our Pages & iOS example.  There are two issues here.  First,
iOS would need to add an API that lets the caller provide the current
fill, stroke, etc. colors, and Pages would have to call it before every
single "draw text" call (in the most simplistic implementation).  Second,
iOS would either need to provide a "map color to SVG" method for the
client to call before the "set glyph colors" call, or it would have to do
the mapping itself.  Either way, it's a lot of work.
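Even the simplest piece of that work, mapping a plain platform color into an SVG paint, is non-trivial boilerplate that would run on every text draw.  A hedged sketch (Python; the function name and float-RGBA input convention are hypothetical, and real graphic states also carry gradients, patterns, and blend modes with no direct SVG equivalent):

```python
def platform_color_to_svg(rgba):
    """Map a platform RGBA color (floats 0..1) to an SVG hex color string
    plus a fill-opacity value -- the kind of per-call conversion option 1
    would force on the OS or on every client."""
    r, g, b, a = rgba
    svg = "#{:02x}{:02x}{:02x}".format(
        round(r * 255), round(g * 255), round(b * 255))
    return svg, a

# 50%-opaque red from the client's graphic state:
paint, opacity = platform_color_to_svg((1.0, 0.0, 0.0, 0.5))
# paint == "#ff0000", opacity == 0.5
```

And this is the easy case; the moment the caller's fill is a gradient or a bitmap pattern, there is no single-line mapping at all.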


>So the intention is that all of the standard ways that you would specify
>metrics for OpenType glyphs would also apply to the SVG glyphs.  So the
>OpenType outline glyph for a particular glyph ID would have the same
>advance as the SVG glyph, etc.

Great - glad we are on the same page on this one!


Leonard

Received on Sunday, 3 February 2013 15:27:13 UTC