
Re: quick comments on Matrix

From: Dirk Schulze <dschulze@adobe.com>
Date: Fri, 15 Mar 2013 04:36:01 -0700
To: "robert@ocallahan.org" <robert@ocallahan.org>
CC: "public-fx@w3.org" <public-fx@w3.org>
Message-ID: <5CD07819-C41B-4B71-8DB2-B23E2DCC7CE9@adobe.com>

On Mar 14, 2013, at 10:31 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Fri, Mar 15, 2013 at 5:51 PM, Dirk Schulze <dschulze@adobe.com> wrote:
>> > In rotateFromVector(By), what happens if the x or y coordinate is zero? Actually it's not clear to me why these can't be zero.
>> 
>> This was carried over from SVG. The calculation is "(+/-) atan(y/x)". While y can be zero, x cannot, since it appears in the denominator. I assume the editors of SVG wanted to have consistency between the two values.
> 
> The case where x is 0 makes geometric sense though. The only case that doesn't make sense is when both x *and* y are zero. I think SVG got this wrong.

I guess what you suggest is that (x,y)^T = (0,1)^T is a rotation by 90 degrees and (1,0)^T is a rotation by 0 degrees? (Note that the rotation is clockwise in SVG.) Maybe it is better to just specify the angle between (1,0)^T and the vector given by x and y. The only special case would be when both values are 0 (which could be defined as multiplication by the identity transform, i.e. no effect).
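A minimal sketch of that definition, using atan2 rather than atan(y/x) so that x == 0 is handled naturally (the function name is mine, not spec text, and the (0,0) behavior is the "identity" convention suggested above):

```python
import math

def rotate_from_vector_angle(x, y):
    """Angle in degrees between (1, 0)^T and (x, y)^T.

    atan2 copes with x == 0 (e.g. (0, 1) -> 90 degrees); only the
    degenerate case x == y == 0 has no direction, and is treated
    here as "no rotation" (identity transform).
    """
    if x == 0 and y == 0:
        return 0.0  # assumed convention: identity, no effect
    return math.degrees(math.atan2(y, x))
```

In SVG's screen coordinate system (y pointing down), a positive angle from this function appears as a clockwise rotation.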

> 
>> > What's the rationale for using double for matrix elements? Are implementations allowed to use 32-bit precision internally?
>> 
>> WebIDL requires a datatype for attributes. In this case there was a choice between float (32-bit precision) and double (64-bit precision). The latter seems to make more sense - especially because of Float64Array.
> 
> I think it's worth considering whether we need to specify that the implementation uses double precision. On one hand, specifying the precision would help interop. On the other hand, requiring every Matrix to be backed by a 4x4 matrix of float64s might have a real performance penalty vs float32s. 

There were similar discussions on the webkit-dev mailing list [1]. Usually 32-bit maps better to the underlying GPU; the difference matters less on desktop CPUs. With standards like the OpenCL 1.1 full profile and others, GPUs are required to get better at double precision anyway. The decision is really whether we want to optimize for current hardware, or look at the lifetime of the spec and assume that hardware gets better. Changing the spec later causes incompatibility (even if the difference is very small).
WebIDL does not seem to support leaving the precision optional. That means I needed to choose between 64-bit and 32-bit. Based on requests here on this list, it seems that there are use cases for double precision beyond graphics for the screen.
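The interop concern is observable from script: if one implementation stores matrix elements as 64-bit doubles and another rounds them through 32-bit floats, the same assignment can read back differently. A small illustration (the helper name is mine; it simulates 32-bit-backed storage by round-tripping through IEEE 754 single precision):

```python
import struct

def stored_as_float32(x):
    """Round-trip a Python float (an IEEE 754 double) through 32-bit
    storage, as a float32-backed matrix implementation would do."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# 0.5 is exactly representable in both widths, 0.1 is not:
print(stored_as_float32(0.5) == 0.5)
print(stored_as_float32(0.1) == 0.1)
```

The rounding error is tiny (on the order of 1e-9 for values near 0.1), but it is enough for scripts to distinguish the two behaviors, which is why the spec has to pick one.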

Greetings,
Dirk

[1] https://lists.webkit.org/pipermail/webkit-dev/2012-May/020764.html

> 
> Rob
> -- 
> Wrfhf pnyyrq gurz gbtrgure naq fnvq, “Lbh xabj gung gur ehyref bs gur Tragvyrf ybeq vg bire gurz, naq gurve uvtu bssvpvnyf rkrepvfr nhgubevgl bire gurz. Abg fb jvgu lbh. Vafgrnq, jubrire jnagf gb orpbzr terng nzbat lbh zhfg or lbhe freinag, naq jubrire jnagf gb or svefg zhfg or lbhe fynir — whfg nf gur Fba bs Zna qvq abg pbzr gb or freirq, ohg gb freir, naq gb tvir uvf yvsr nf n enafbz sbe znal.” [Znggurj 20:25-28]
Received on Friday, 15 March 2013 11:36:32 GMT
