Suggestions for additions to the W3C Specification for SVG

Hi 
 
Sorry the embedded images are broken in the enclosed document. I did this in RTF, thinking that would be highly portable; it even worked when I mailed it from home to office. As it turns out, I can either send the images or reach the full mailing list, but apparently not both (home and office don't agree on who belongs to a contracted name, RTF attachments don't seem to work in Outlook/Exchange, and ... oh, the problems just continue). So anyhow, I attached the images and enclosed the text. I will stick a proper version on the web when I have a chance and send the URL.
 
Let's see if the SVGIG can help with this discussion. Pay particular attention to the <superpath> since it does animation, page layout, mapping and graph theory all in one fell swoop.
 
respectfully,
David
Suggestions for additions to the W3C Specification for SVG

The SVG Working Group (WG) has demonstrated considerable willingness to collect input from the community about the sorts of features that the language should contain to best support the needs of its user community. Like its cousin, HTML, SVG has a remarkably divergent set of user needs. Given its emphasis on material displayed on planar devices, and given the intrinsic importance of spatial relations in the presentation of content using a spatial metaphor, SVG's user community may be expected to grow to embrace and support even more diversity than HTML, which at its core is (with the exception of the <table> element and possibly other elements related to spatial arrangement) 1.5 dimensional: that is, its fundamental metaphor consists of written speech (text), with occasional embedded belches of multimedia (<object>, <img>, <audio>, <video>) plus graph-theoretic cross-references that provide a modest foray into translinearity. SVG has every bit as much semantic and pragmatic reference to meaning as HTML, and it is every bit as hypertextual, but its core metaphor for expression is the plane rather than the line. As such it has already come to attract artists, physical and social scientists, and mathematicians whose needs for expression transcend the ability to belch static frames generated elsewhere into an otherwise translinear stream of text. For this community, multidimensional space is not the occasional 2D painting on the wall of an otherwise 1.5-dimensional hypertext; rather, for n >= 2, n-dimensional space is home.

In addition to being quite close to wrapping up one specification, the WG has been working hard on another. It has announced both that it is focusing some of its efforts on extensions of the spec in particular areas of initiative (such as filters, gradients, transforms, accessibility, etc.) and that it is interested in engaging the community in broader dialogue about its needs. In response to community feedback it is already planning to consider extensions into non-affine transformations, richer sets of filter primitives, and improved control over layout and flow.

At the same time, it is important for the broader SVG community to respect the facts that a) standards development is hard work involving intense amounts of behind-the-scenes effort on validation, test suites, conformance and other magical things I don't pretend to understand, b) the WG already has a large agenda mandated by the standards process, and c) there are only so many individuals in the WG to do all the work already on its plate. In part, this is why the SVG-IG has come into existence: to help with the workload (and hopefully, to shrink it at least as much as we add to it).

The IG has made a helpful effort in that direction already with the decision to publish "the book" (representing maybe a person-year's worth of effort), and the WG has demonstrated a strong sense of cooperation as well. So, without appearing to dump a giant chunk of work onto the WG, I'd like to suggest that we have some discussions about new classes of effects for the SVG spec, under a couple of provisos: 1) let's realize that these are not responses to a candidate or current draft, and 2) spec writers usually prefer it when users structure the discussion of needs in a vocabulary and syntax that spec writers themselves use: use cases, disambiguation, validation, ontologies, and so forth. That is, if we can make our needs clear, document how or why the current spec doesn't do something, provide multiple situations (use cases) in which the feature could be useful (to the arts or sciences -- while recognizing that usefulness can mean very different things to different communities), and perhaps even suggest how the syntax of the feature might be expressed, then the IG might reach consensus that a given feature be recommended to the WG for inclusion in "a future spec." The reason for saying "a future spec" is that, as I understand it, the W3C requires rather extensive amounts of work from a WG depending on the stage at which a given spec is in its progression from draft to recommendation. I don't believe it would be in anyone's interest to derail any progress currently underway. We like SVG!

Some Suggestions:

I will discuss the following: the <contour>, an object consisting of concentric subpaths (closed curves) that instructs the browser (or other user agent) to interpolate those subpaths into something resembling a path filled with a gradient that is considerably more flexible than the radial or linear gradients applied as effects.

A syntactically similar notion is the <superpath>. Like the <contour> it has <path> elements within it, but those path elements are never closed by z in their "d" attribute and represent the different borders that a given region might share with its neighbors (as in a geopolitical map). It is likely to be used for map processing, graph-theoretic purposes (like diagramming), animation, and layout (layout that is more flexible than the HTML table or its CSS counterparts, since it affiliates regions, connections and boundaries of arbitrary partitions of the plane).

Also included (with increasingly less clarity in my mind as to exactly how they would work) are the <doodle>, the editable attribute for <path>s and for certain <animate>s, the editable <sound>, and the fractal (is it a basic shape or is it a filter?).

Speaking of filters, I believe the universe of possible filters (like the class of permutations of strings) is a large set. While we might observe that a small number of primitives may suffice to generate all expressions in that expressive realm, I am not clear how best to optimize the size of that set. To make matters worse, I think the boundary between filters and transforms occasionally becomes blurry -- perhaps because of Photoshop's inclusion of "distort" options in the filter menu while simpler transforms live in the edit menu. Given the already rather high learning curve associated with filters (so it would seem, based on the expertise level of the audience that might show up for a workshop on filters), I think we should exercise caution about expanding that set too quickly. I am reminded of how quickly Kai's Power Tools became accessible as Photoshop plugins in the 1980s, so we may wish to consider extensible grammars in this area. Nevertheless, I have three things to say on this topic: spherize and pinch (as either transforms or filters) are quite handy (owing to their relationship to common lenses), and a mechanism for weaving pictures together as in http://srufaculty.sru.edu/david.dailey/javascript/weave/weaframe2.html could prove useful to both the artistic and cryptological communities. Last and foremost, it'd be nice to have a filter to convert bitmaps to paths, like what apparently happens in the diffusion curves video at http://www.youtube.com/watch . The fellow from Los Alamos at SVG Open 2007 was doing pretty cool stuff with SVG and realtime data processing of scenes -- his algorithms were quite fast, though USDOE has gotten rather stuck with cost recovery lately, so I don't know if the stuff is still covered by the "works of the US govt / copyright" proviso. If nothing else, the stuff that Adobe was doing in its early Illustrator (88), the stuff in Inkscape (which is pretty darn sophisticated), and NIH Image would give us an ample start in that direction.

The <doodle> is a somewhat declarative alternative to the <path>. Rather than drawing it by enumerating coordinates and transitions as in its simpler cousin <path>, one enumerates coordinates and transitions, but also extrapolations, iterations, and simple enumerated, conditional and recursive operations.

The doodle and the others will be discussed in more detail below.

A. The new grouping elements: <contour> and <superpath>

Basically, the contour and the superpath are like groups (<g>), but all the visible elements inside are <path>s. Each gives explicit instructions as to how the objects inside are combined, extrapolated, and animated on the screen, and how they are processed through certain JavaScript (or other language) calls. (I know the Spec refers to ECMAScript rather than JavaScript, but ours is, after all, not a standards group, and it is probably best to keep a sense of informality, and even occasional humor, in our discussions.)

B. The <contour>

As mentioned earlier, <contour> is an object consisting typically of concentric subpaths (closed curves). It interpolates those subpaths into something resembling a single path filled with a gradient that is considerably more flexible than the radial or linear gradients, applied as effects. It is like a contour map, but since the border (stroke) is not required, it resembles, when the curves are concentric, a shape gradient that may have multiple shapes inside. I had started writing this up before I had a chance to look at the stuff that we heard about concerning Diffusion curves, so I don't know how the two ideas compare. This idea is straightforward to implement and would rely on some code the browser implementers have probably already built (like following movement along a path). So with that in mind, here goes:

The paths within <contour> are constrained as follows:

Their fill must be either a simple color or a gradient (no patterns or filters). If two fills of gradient type are adjacent (in markup), then both must be of the same type (radial or linear). Gradients of different types must not be adjacent: between a linear and a radial gradient there must be a fill of the null type (e.g. a simple fill="color"), so as to allow an unambiguous interpolation.

Typical syntax:

		<contour type="null|gradient|animate" path="url(#q)" method="uniform|equidistant" steps="positive-integer">

				<path/>

				<path />

				.

				.

				.

				<path/> 

		</contour>
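
A concrete (purely illustrative) instance, using the attribute names above and respecting the fill constraints above, might look like this (the two gradient ids are assumed to be defined in a <defs> elsewhere):

		<contour type="null" method="uniform" steps="10">
			<path d="M 20 20 L 180 20 L 180 180 L 20 180 z" fill="url(#someLinearGradient)" />
			<path d="M 60 60 L 140 60 L 140 140 L 60 140 z" fill="#ffcc00" />
			<path d="M 90 90 L 110 90 L 110 110 L 90 110 z" fill="url(#someRadialGradient)" />
		</contour>

Here the outermost and innermost fills are gradients of different types, so the plain-color square between them supplies the null fill the constraint calls for; with steps="10", ten intermediate squares would be laid down shading from the outer fill toward the inner one.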

		 

The way it works is this:

In the default case, we build a shape by interpolating between two other shapes, as in Illustrator (circa 1988) when you'd tween between two Bezier curves. The tweening lays down, in sequence, a series (of size "steps") of new path elements moving from the background to the foreground. Those path elements are rendered on screen and inserted into the DOM, so that the <contour> expands the DOM declaratively. Inkscape provides this functionality under "effects" / "generate from path", which one can experiment with easily.

By default the interpolation moves the bounding box of the first path to the bounding box of the last (with possible intermediate steps given by the path elements between first and last), using linear interpolation. If a path attribute is provided, then the position of the center of the bounding box of the interpolated curves will follow that curve. If the curve provided does not coincide with the bBox centers of paths that lie between the first and last, then the locus of those intermediate paths is overridden by the path of the provided url. [I'm not sure I wrote this correctly, but I'm thinking of something much like the way SMIL animation follows a curve.]
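
For the simplest flavor of this (no path attribute, straight linear interpolation), here is a rough script sketch of what a user agent might effectively do. It assumes, purely for illustration, that the first and last <path> have already been sampled into equal-length arrays of [x, y] pairs, and fill interpolation is omitted.

		// points0, points1: arrays of [x, y] pairs sampled from the first and last <path>
		// steps: the number of intermediate paths to lay down between them
		function tweenPaths(points0, points1, steps) {
			var svgNS = "http://www.w3.org/2000/svg";
			var frames = [];
			for (var s = 1; s <= steps; s++) {
				var t = s / (steps + 1);                     // how far along we are, 0..1
				var d = "";
				for (var i = 0; i < points0.length; i++) {
					var x = (1 - t) * points0[i][0] + t * points1[i][0];
					var y = (1 - t) * points0[i][1] + t * points1[i][1];
					d += (i === 0 ? "M " : "L ") + x + " " + y + " ";
				}
				var p = document.createElementNS(svgNS, "path");
				p.setAttribute("d", d + "z");                // each intermediate shape is closed
				frames.push(p);                              // to be inserted into the DOM, back to front
			}
			return frames;
		}

Interpolation of the fills (color to color, or gradient stop to gradient stop) would proceed analogously.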

If the type specified is "gradient" then the <contour> itself may be used to shade any other simple drawn object in much the way that a radial gradient whose centroid and fx cause out-of-bounds regions. In such case, the <contour> is not rendered, and instead acts like it is part of a <defs> tag.

If the type specified is "animate" then the intermediate frames will not be added to the DOM nor accumulated on screen as rendered, but rather will be "moved" from the first position to the last following the "path" and interpolating shape and fill from beginnng to end (with the each specified path acting as a sort of intermediate keystop).

Use cases:

1. creating simple 3D-looking objects with shading

2. allowing a richer class of gradients than just linear and radial (useful for drawing)

3. allowing a richer class of declarative animation in which two shapes and a path determine the framewise transitions as well as the 2-D path.

C. The <superpath>

This is used to facilitate animation of separate parts of an object, to easily define a class of part-whole relations, or to define a sequence of boundaries that delimit regions, as in a map. It creates a superpath that is the union of a series of subpaths, in which those subpaths may be shared between multiple ("contiguous") superpaths.

Typical syntax:

<superpath type="map|graph" fill="color|gradient|pattern|filter|svg">

		<path/><path/>

		<path /><node id="quotedstring" /><path />

		.

		.

		.

		<path/><path/>

		<node />

</superpath>

The subpaths within <superpath> are constrained and interpreted as follows:

1. They must not have a "z" path command at the end of their "d" attribute.

2. The <node> objects are for reference to the vertex shared by any two consecutive paths within the superpath.

3. By default, a superpath is of type "map".

4. Question: is there any value to nesting superpaths, or does the group element suffice?

5. a <node> at the end of the sequence of subpaths within

Superpaths allow for condensation of the data needed to specify a planar map or for the planar embeddings of graphs (whether intrinsically planar or not).

Consider the following diagram that shows three planar regions drawn next to each other. At one level of zoom and data precision, the regions appear to abut one another properly, but at another level we see that the imprecision (due to the level of accuracy) has sundered the images from one another.

[Image: the regions look tightly connected at low zoom, but gaps appear at high zoom]

The superpath works by breaking each region into a series of subpaths -- each describing the border where precisely two regions meet. A subpath is either a <path> or a <node>: a path containing only one x-y coordinate.

Let's begin with an example:

[Image: four regions connected as in a map]

The above map could be described as

<g>

		<superpath id="OneYellow" color="mustard">

		<path id="A" d=[description of path from node3 to node2] />

		<path id="B" d=[ description of path from node2 to node1] />

		<path id="E" d=[ description of path from node1 to node3] />

		</superpath>

		<superpath id="OneBlue" color="blue">

		<path id="C" d=[description of path from node4 to node2] />

		<path id="B" d=[description of path from node2 to node1] />

		<path id="D" d=[description of path from node1 to node4] />

		</superpath>

		<superpath id="OneGreen" color="lightgreen">

		<path id="A" d=[description of path from node3 to node2] />

		<path id="C" d=[description of path from node2 to node4] />

		<path id="F" d=[description of path from node4 to node3] />

		</superpath>

</g>

Note: any fills associated with paths that serve as subpaths are ignored, though their strokes might not be (since different borders between regions may require differing strokes). If a superpath region defines a stroke, then all subpaths inherit that stroke unless they have their own stroke defined.

Given two consecutively defined paths within a superpath, it is generally assumed that the last point of one, A(n), will be the first point of the next, B(1). If not, there are two options: either a line segment is added connecting A(n) to B(1) prior to rendering, or a <node> element is inserted between the two paths to specify the nature of the transition between the subpaths it connects:

		<superpath id="OneGreen" color="lightgreen">

		<path id="A" d="100,100 200,200 />

		<node x="210" y="210" join="smooth"/>

		<path id="A" d="200,220 100,300 />

		</superpath>

The above would instruct the browser to find a "smoothest" transition between the two paths -- one which matches the slope of the two paths at the endpoints being joined. This involves the definition of the <node> element, which for now is rather vague, but its purpose is to allow styles to be assigned to the juncture of two path segments, to give simplified control over animations of <superpath>s by animating the locus of such <node>s, and to allow the identification of graph-theoretic "nodes" when the superpath is of type="graph".

When type="graph" the concentration is on the linking structure provided between nodes or points. In the above diagram, the graph defined, would be the geometric dual of the map, so that this map (informally used to mean a group of superpaths) would in fact define the graph K4 (4 nodes all connected pairwise).

When we fill a superpath not with a color or other paint but rather with SVG, what I mean is that we flow a given group into the shape defined by that region. This allows a very flexible class of geometric layouts in the plane -- far more flexible and fine-grained, I think, than the HTML <table>, and far more user-friendly than CSS when it comes to actually building something (an empirical hypothesis demanding human-factors data!).
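
Purely as a straw man for syntax (the content attribute and the notion of pointing it at a <g> are inventions of mine for this example), flowing a group into one of the regions from the map above might look like:

		<g id="newsColumn">
			<!-- text, images and other SVG to be flowed into the region -->
		</g>

		<superpath id="OneYellow" type="map" fill="svg" content="url(#newsColumn)">
			<path id="A" d=[description of path from node3 to node2] />
			<path id="B" d=[description of path from node2 to node1] />
			<path id="E" d=[description of path from node1 to node3] />
		</superpath>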

Use cases/ rationale:

1. Allowing the author and user to zoom in on the borders of regions of maps without having to specify huge levels of precision, and still having the region borders zoom accurately when we zoom in to close levels of detail.

2. Allowing the data reduction afforded by not having to redundantly ship entire descriptors of regions which share common borders

3. Enabling methods which might deform one border of two shapes without having to redraw the entire map.

4. Enabling ease of animation by providing markup-based parsing of a drawn object into semantically relevant subobjects and maintaining the relational structure of those subobjects. For example, by breaking a path's d attribute that has 1000 elements into 100 subpaths of 10 control points apiece, we may animate just the "arm" subpath of a cartoon character without having to interpolate between two 1000-point paths. This will likely make life easier for hand-coders and authoring tools alike.

5. Providing a lightweight format for the specification of binary relational data: graphs. The stipulation of the <node> object should be fleshed out, but one can consider that by default an entire group of superpaths would inherit ...

6. Providing a flexible method for stating the spatial relations between collections of SVG objects.

7. Providing scripting methods appropriate to the support of the above use cases. 

Example:

		var A = document.getElementById("Q");   // where Q is a group of superpaths

		// can be followed by

		var graph = A.toGraph();

		// which, for the above example, would return a JavaScript object such as

		graph = [
			[1, 2, 3],   // nodes adjacent to node 0
			[0, 2, 3],   // nodes adjacent to node 1
			[0, 1, 3],   // nodes adjacent to node 2
			[0, 1, 2]    // nodes adjacent to node 3
		];

That is, an incidence array for the nodes of the graph (which is sufficient for all relational data). If the geometry of the paths is relevant then

Graph=A.toGraph("lines") might return an object consisting of an array of node incidences together with an array of pointers to the actual subpath objects in DOM.

D. The <doodle>

My thought here is based on the simple observation that it is too much work to have to actually write or draw all the coordinates in the d attribute of a path. Can't we do that declaratively by saying something like this:

Walk south for three blocks, turn 20 degrees, repeat this operation 12 times, but reducing the scale each time by 20% as we go. This generates a spiral.

I remember writing a paper in grad school about the syntax of such declarative drawing languages (there were a couple at the time -- maybe Herb Simon or a colleague, and later Seymour Papert, defined one, as I recall, in Logo). My recollection of what I concluded is that the isolation of the primitives was subjective and that no objective work had been done on which of several distinct axiomatizations actually worked best for humans. Progress on empirical work in such heavily theoretic areas has, I suspect, been largely neglected by researchers who would rather build things than study humans. However, there is a class of smooth planar four-regular graphs that, as neural networks, are sufficient to drive computation in two-space (with only a trit of memory per processor), and the drawing of those graphs would be awfully nice if it could be done somewhat declaratively.

I note that Inkscape's "effects" menu has a lot of the sorts of things I may be thinking of here. A simple microformat (is that the right word?) could be developed that would be easy to implement and would expand the expressive power of mathematicians, artists and animators alike.

<doodle dd="(((M 100 100 radian 330 12) scale .5) iterate 12) (((last duplicate) translate 30) rotate 20) iterate 5" />

Basically, it'd be sort of like the use of <use> in http://srufaculty.sru.edu/david.dailey/svg/ovaling.svg but defined on parts of curves rather than on objects.

The primitive operations should include rotate, translate, scale, but also reflect (invert the markup point sequence description from left to right), fragment (shake or permute a given sequence), randomize, smooth and fractalize (to a given degree).
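
As a rough illustration of what a user agent might expand such a doodle into, here is a script sketch of the "walk, turn, shrink, repeat" recipe above, emitting an ordinary path d string; the particular numbers are, of course, made up.

		// Start at (x, y), walk len units, turn by turnDeg degrees, shrink the next
		// step by the factor scale, and repeat count times -- an inward spiral.
		function doodleSpiral(x, y, len, turnDeg, scale, count) {
			var d = "M " + x + " " + y;
			var angle = Math.PI / 2;                     // heading "south" (+y in SVG user space)
			for (var i = 0; i < count; i++) {
				x += len * Math.cos(angle);
				y += len * Math.sin(angle);
				d += " L " + x.toFixed(2) + " " + y.toFixed(2);
				angle += turnDeg * Math.PI / 180;        // turn
				len *= scale;                            // reduce the scale of the next step
			}
			return d;                                    // e.g. doodleSpiral(100, 100, 60, 20, 0.8, 12)
		}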

E. Editable graphic objects

Many folks have called for SVG to adopt editable text objects (like textareas). SVG 1.2 Full has those, and Opera has already implemented editable text in SVG. Flowing text shapes are coming as well. Basically, we want users to have a way to enter and modify strings in SVG.

Well, since SVG is a graphics language, it just makes sense that we should allow run-time editing of shapes as well.

Any shape marked as editable should be, for a start, selectable, repositionable, scalable, rotatable and annotatable. The interface for doing this is pretty consistent from application to application these days -- Inkscape and some other open-source projects are usable and probably available for adoption.

Click on an object: its boundary becomes highlighted and handles appear. While it is selected, if the mouse is over the object a second click signals the beginning of a drag. A click on a handle signals the beginning of a resize; a click outside all of these (but within some radius) signals a rotation. While the object is selected, the delete key removes it from the DOM.
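
At present that sort of behavior has to be scripted. A minimal sketch of just the select-and-drag portion (no handles, no rotation; the class name "editable" and the data-* attributes are simply my own conventions for this example, and scaling by a viewBox is ignored):

		var selected = null, startX = 0, startY = 0, baseX = 0, baseY = 0;
		var root = document.documentElement;                  // the outermost <svg> element

		root.addEventListener("mousedown", function (evt) {
			var t = evt.target;
			if (t.getAttribute && t.getAttribute("class") === "editable") {
				selected = t;                                 // highlighting/handles would be added here
				startX = evt.clientX;  startY = evt.clientY;
				baseX = +t.getAttribute("data-tx") || 0;      // where the shape has been dragged to so far
				baseY = +t.getAttribute("data-ty") || 0;
			} else {
				selected = null;                              // a click elsewhere deselects
			}
		}, false);

		root.addEventListener("mousemove", function (evt) {
			if (!selected) return;
			var tx = baseX + evt.clientX - startX;
			var ty = baseY + evt.clientY - startY;
			selected.setAttribute("transform", "translate(" + tx + "," + ty + ")");
			selected.setAttribute("data-tx", tx);             // remember the accumulated offset
			selected.setAttribute("data-ty", ty);
		}, false);

		root.addEventListener("mouseup", function () { selected = null; }, false);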

Just as text folks want to be able to edit text without lots of script, graphics folks probably want to be able to edit graphics without a lot of script. This is SVG, of course.

F. Editable sound objects

Ditto all the above arguments for sound. It'd be relatively easy to build a little SVG sound studio for all embedded sounds, so why don't we?

Suggestion concerning the status of such speculation

Since the ordinal numbers of the specs currently in various states of juvenescence are all (to my knowledge) less than 1.3, how about we entitle (in an informal way, appropriate perhaps to an interest group that is not a working group) all such speculative specifications "speculafications", and let us adopt (again in the most informal of ways) an ordinal number for the union of all current speculafications equal to the next integer above the current Spec minus an infinitesimal. In this case, since the current Spec number is greater than 1 but less than 2, all such speculations the IG discusses would be labeled SVG 1.9999... At some point we might ask the SVG WG to recommend to some committee on committees (a WGWG?) that the official Spec never be numbered with an integer minus an infinitesimal, since that entire countable set of individually infinite strings would be reserved for speculafications.

(Doug, can I have an action item # for resolving the discussion of whether our group's discussion of specs should have a number?)

I conclude with some of my own examples from the end of my recent paper. Some of these I've talked about already (but some I have not):

				1. Some types of gradients that are more flexible than just linear and radial would really be handy. The contour gradients of many applications, which run parallel (or perpendicular) to a bounding shape, might help (certainly for the task of stitching things together into larger smooth textures). It might be nice to take n user-defined points in the plane and to define from them a set of contour lines which would run, like altitude lines, around those points, providing arbitrary gradient maps over a region.

				2. Transforms of the non-affine sort would certainly be useful. While it is encouraging that one can simulate these, a proper simulation would require too many small regions to be practical. Cylindrical and spherical warps obviously come to mind, but any extension of the conic warp attempted in the section on clipPaths would be useful. 

				3. Some types of random noise other than what is provided through feTurbulence could be useful in allowing the class of natural textures that we can construct to expand. 

				4. Additional filters (such as one can purchase as add-ons to Adobe Photoshop for example) would be handy. The convolution filters allowed are flexible, but for the average user, things like "plasticize" or "chromatize" might be more straightforward. 

				5. Additional 3-D constructs could be added. In addition to non-affine transforms, which some might view as special cases of 3-D constructs, there is currently no compelling standard for 3-D work on the web. Adding a bit of this into SVG might serve to fill a void without stepping on any exposed toes.

				6. Extension of the declarative animation model to include occasional imperative constructs: for example, might we not benefit from having the ability to define random durations or random x-y loci in our declarative markup without having to rely upon script? Likewise, a construct that allows us to specify that a certain object might move in such and such a direction until it encounters an edge or another object, might be quite a powerful extension to the quality of declarative code within SVG.

				7. In conjunction with the posterization of bitmaps, the ability to read color values at given pixel locations back into script is quite important. Likewise, the ability to use a "file upload" capability to bring images from user-drive-space into an SVG application is crucial. It appears that the HTML WG (at least WHATWG) has decided this is to become verboten (perhaps it causes too much work for browser makers to worry about security problems) but in the 15 years that Internet Explorer has allowed it in HTML it seems not to have crashed the Internet. Perhaps SVG could enable this in some way, since the HTML bus seems intent on squashing this ability?

				8. Non-rectangular <pattern> spaces. It's clear that most of this can be simulated through rectangular ones. And it is also quite likely that nondeterministic tilings will require script rather than markup for quite some time to come. But SVG could enable the non-programmer to produce interesting visual effects by simply enabling the choice of any of the uniform tilings. Some of these might actually prove to be practical, since the hexagonal tiling, for example, is often used in simulations of battlefield scenarios because distance in that graph is slightly more similar to Euclidean distance than is distance in square grids.

				9. And of course, we have the simple request that the offset of a text should be expressible as a negative number -- it seems rather inconsequential in contrast to some of the rest of this wish list.

I conclude with some comments from my book that, I see, do not entirely overlap with the above. How delightful that every time I think about what SVG needs I come up with something else. That means I don't have to be too concerned if none of this happens in any immediate time frame.

So although I am grateful for the bitmap capabilities we do have in SVG, they are, nevertheless, a source of some of what I would like to see added. In truth, my wish list of features to see in SVG may overlap with some of what SVG 1.2 has to offer.

		1. Knowing the color of a particular pixel on the screen. If we could do that, then we could not only apply convolution filters to bitmaps to display high-contrast edges, but we could then convert the edges back to vectors. I understand that there are potential security issues associated with giving scripts the ability to take pictures of the user's screen, but Opera 9.0 seems to have implemented getPixel and setPixel methods that allow interrogating and setting pixel values for arbitrary pixels within a <canvas> object in the HTML environment (a sketch follows this list). If we wished to enable desktop image analysis to be done in the browser, then access to pixels will be pretty important.

		2. The ability to deform an arbitrary quadrilateral into another arbitrary quadrilateral (as with the distort tool in Adobe Photoshop(tm)). The affine transformations allowed within SVG 1.1 and 1.2 allow scaling, translation, rotation and skew, which together offer mappings between arbitrary pairs of parallelograms. If we wished to be able to do the morphing of one bitmapped image into another, quadrilateral distortion would seem to be handy. To build something like the "liquefy" filter in Photoshop would likely rely on mesh-based distortions that fall outside the affine transformations currently recommended.

		3. The ability to slice a bitmap into subregions where each subregion is not merely a clip of the entirety (see the sketch after this list). If we wish to make large-scale jigsaw puzzles, and then scramble the resulting pieces, numerous copies of the bitmap would likely need to be created using either <clipPath> or <mask>. Either is highly RAM intensive and would require enormous allocations of memory to perform a fine-grained relocation of small chunks of imagery.

		4. Gradients of varieties other than just linear and radial. To create smooth transitions of fill patterns across a variety of shapes, or with more than two colors, is likely to involve a more complex specification for gradients (such as mesh gradients). To piece together segments of linear and radial gradients to approximate omni-directional gradients has been compared by some to making smooth curves out of line segments: possible, but tedious.
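
Two sketches relating to the items above.

Concerning item 1, the closest standardized equivalent today lives in the HTML <canvas> element. A minimal sketch (the image URL is just a placeholder, and the call is subject to same-origin security restrictions):

		var canvas = document.createElement("canvas");
		var ctx = canvas.getContext("2d");
		var img = new Image();
		img.onload = function () {
			canvas.width = img.width;   canvas.height = img.height;
			ctx.drawImage(img, 0, 0);                                // paint the bitmap into the canvas
			var px = ctx.getImageData(40, 25, 1, 1).data;            // read the single pixel at (40, 25)
			alert("r=" + px[0] + " g=" + px[1] + " b=" + px[2] + " a=" + px[3]);
		};
		img.src = "someBitmap.png";                                  // placeholder URL

Concerning item 3, the workaround alluded to -- one clipped copy of the bitmap per piece -- looks roughly like this in current SVG (ids, dimensions and path data are placeholders):

		<defs>
			<clipPath id="piece1">
				<path d="M 0 0 L 120 0 L 120 80 L 0 80 z" />           <!-- outline of one puzzle piece -->
			</clipPath>
		</defs>

		<g clip-path="url(#piece1)" transform="translate(300, 150)">  <!-- relocate the piece -->
			<image xlink:href="photo.png" width="800" height="600" /> <!-- a full copy of the bitmap -->
		</g>

Each piece needs its own <clipPath> and, in effect, its own copy of the <image>, which is exactly the memory cost described above.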

		
