
RE: [OpenType] Update on color/animation in OT via SVG; new W3C Community Group

From: Levantovsky, Vladimir <Vladimir.Levantovsky@MonotypeImaging.com>
Date: Mon, 28 Nov 2011 15:37:00 -0500
To: Leonard Rosenthol <lrosenth@adobe.com>, Cameron McCormack <cam@mcc.id.au>, Sairus Patel <sppatel@adobe.com>
CC: "public-svgopentype@w3.org" <public-svgopentype@w3.org>
Message-ID: <7534F85A589E654EB1E44E5CFDC19E3D1186E29510@wob-email-01.agfamonotype.org>
On Wednesday, November 23, 2011 1:43 PM Leonard Rosenthol wrote:
> Vladimir wrote:
> >I agree. The rendering mode depends on the target media (e.g. screen
> >vs. paper), so the animation flag shouldn't really be changed on a
> >glyph-by-glyph basis. We also need to allow SVG documents to specify
> >different content for animated / static glyphs.
> And are they separate glyph definitions in the same SVG OR separate
> SVG blocks in the OT?  The former would allow for <use> and other
> shared elements, plus not requiring duplication of data if not all
> glyphs animate.  However, the latter would make it easier on the OT
> engine and a clear differentiation of static vs. animated, so that
> additional restrictions (scripting, etc.) could be put on the static
> versions (if desired).

These are all good points, and I really didn't think much about the implementation details when I suggested having different content for static/animated glyphs. A while back, at the beginning of the SVG/OT discussion, someone suggested treating the first frame of an animated glyph as its static representation (or having a single piece of content and, for an animated glyph, using a snapshot of a specific point on its timeline as the static representation), which I consider very limiting.
As Cameron mentioned in one of his responses, defining separate content for static and animated glyphs could be as simple as having two children of the same <glyph> parent.
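To make the idea concrete, a hypothetical sketch of what "two children of the same <glyph> parent" might look like follows. The element names (<static>, <animated>) and the overall structure are purely illustrative, since no syntax has been agreed on yet:

```xml
<!-- Illustrative only: the <static>/<animated> child elements below are
     not from any agreed-upon spec, just one way to express the idea. -->
<glyph id="smiley">
  <static>
    <!-- content a renderer uses on static media (e.g. paper) -->
    <circle cx="500" cy="500" r="400" fill="yellow"/>
  </static>
  <animated>
    <!-- content used when the target media supports animation -->
    <circle cx="500" cy="500" r="400" fill="yellow">
      <animate attributeName="cy" from="-400" to="500" dur="1s"/>
    </circle>
  </animated>
</glyph>
```

Since both children live in one SVG document, <use> and shared <defs> would still work across the static and animated variants, addressing the data-duplication concern above.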

> >Besides the (x,y) positions for each glyph, we also need to make sure
> >that they are scaled appropriately, so the target size should also be
> >communicated for each glyph, I assume. And, if the coordinate spaces
> >where the OT glyphs and SVG elements are defined can differ, we need
> to
> >consider a mechanism to reconcile this.
> That's interesting.  Are you thinking that you might have a "media
> query" on the element, so that it could adjust itself based on
> "target size"?  If not, what would be the use case for needing that
> info?  Wouldn't the engine just do the same thing it would do when
> drawing into any other viewbox?

I was talking about the target size of a glyph (on screen, on paper), and the use case I had in mind was an SVG document whose glyph elements are defined using design coordinates that differ from those of the glyphs defined elsewhere in the OT font. E.g., I started with an OT/CFF font and created an SVG document with some glyphs to go with it. Later, I want to convert my CFF+SVG font to TTF outlines plus SVG glyphs - do I need to redo the SVG part? How would a rendering engine know what size the SVG glyph needs to be scaled to so that it matches the rest of the text?
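The scaling concern above can be sketched as follows. This is a hypothetical illustration, not spec language; it assumes the SVG part would declare its own units-per-em (analogous to the head table's unitsPerEm), which a renderer could then use to reconcile the two coordinate spaces:

```python
# Illustrative sketch: reconciling SVG glyph design coordinates with the
# font's em square. The idea of an SVG-side units-per-em value is an
# assumption, not part of any agreed-upon design.

def svg_to_font_scale(svg_units_per_em: float, font_units_per_em: float) -> float:
    """Factor that maps SVG design coordinates into the font's em square."""
    return font_units_per_em / svg_units_per_em

def device_scale(font_units_per_em: float, pixels_per_em: float) -> float:
    """Usual font scale: design units -> device pixels at the target size."""
    return pixels_per_em / font_units_per_em

# Example: a CFF font (1000-unit em) is converted to TTF outlines
# (2048-unit em). An SVG document authored against the 1000-unit em
# must be scaled by 2048/1000 = 2.048 to line up with the new outlines,
# after which the engine applies its ordinary per-size device scale.
```

If the SVG document carried such a declared em size, converting the outline format would not require redoing the SVG part - the renderer could derive the correction factor itself.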

> >It is conceivable that the designers' intent might be to have
> >animated glyphs drawn "all over the page", and this is okay if the
> >end result of the animation occupies the place inside the bounding
> >box of a glyph. E.g., a bouncing ball animation where a glyph (like
> >a smiley) is dropped into its place.
> Given the design that Sairus has put forth, I don't see how that
> could be possible.  The SVG engine doesn't (necessarily) have access
> to the rest of the page/document - it's just drawing some "bits"
> into an area given it by the OT engine (which got it from the layout
> engine).  How would this work, for example, in a future version of
> Microsoft Word, Open Office or Adobe Reader?  The SVG engine
> wouldn't be able to access the content over which it would be
> drawing in order to do proper compositing.  For that matter, the
> layout engine might be doing this all offscreen or in separate
> buffers (for performance or caching considerations).
> But clearly this is going to be something we'll need to discuss :).

If I am not mistaken, the only thing mentioned about glyph positioning was (x,y) coordinates, so it wasn't clear how the bounding box or viewport for an SVG glyph would be defined. I remember seeing script fonts where swashes extend well beyond what would be considered a typical glyph bounding box, and I can certainly imagine use cases where animated glyphs are drawn outside the bounding box (at least for a short period of time), so clipping to the bounding box may not be desirable. I've seen quite a few animated emoticons where this is already the case (e.g. a jumping smiley waving its hands), and allowing something like this to happen (even if it's drawn over other glyphs for brief periods of time) isn't necessarily a bad thing, IMO. We should at least consider this as a possible design decision, which means it need not become a technical limitation.
And, as far as compositing is concerned, SVG glyphs could be rendered into an animated alpha image of a certain size, the same way glyphs are rasterized by a font engine - it would then be the job of the layout engine to paint them onto the target media.
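A minimal sketch of that hand-off, assuming the SVG engine produces an 8-bit alpha mask per animation frame and the layout engine does a standard source-over composite (all names here are illustrative, not from any spec):

```python
# Illustrative sketch: the layout engine composites an SVG glyph's alpha
# mask over the page buffer, exactly as it would a rasterized outline
# glyph. Pure Python, no real rasterizer involved.

def composite_glyph(page, mask, color, x, y):
    """Source-over paint of `color` through `mask` onto `page` at (x, y).

    page  : 2-D list of (r, g, b) ints, the target surface
    mask  : 2-D list of alpha values 0..255 from the glyph rasterizer
    color : (r, g, b) fill color for the glyph
    """
    for my, row in enumerate(mask):
        for mx, a in enumerate(row):
            px, py = x + mx, y + my
            if 0 <= py < len(page) and 0 <= px < len(page[0]):
                bg = page[py][px]
                # standard source-over: out = src*a + dst*(255 - a)
                page[py][px] = tuple(
                    (c * a + b * (255 - a)) // 255
                    for c, b in zip(color, bg)
                )
```

The point of the sketch is that the SVG engine never needs access to the page content - it only hands finished per-frame masks to the layout engine, which addresses the compositing objection above.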

Thank you,
Received on Monday, 28 November 2011 20:37:28 UTC
