Re: [css-transforms] CSS3D breaks with opacity flattening

Dear those-who-can-change-the-world-from-one-day-to-the-next,

The spec says:

> The transform-style property allows 3D-transformed elements and their
> 3D-transformed descendants to share a common three-dimensional space,
> allowing the construction of hierarchies of three-dimensional objects.


But the new `opacity` behavior separates objects into 3D spaces which are no
longer common! It is very unintuitive behavior.

I much prefer the *screaming skull* effect over the *paper skull* effect
any day!

*Violation of CSS Principles*

As an end user of today's web technology (specifically an author building a
library on top of CSS3D), I humbly believe that this change reduces the
utility of CSS-based 3D programming on the web whenever opacity is desired.
The change removes the ability to apply CSS opacity to a non-leaf node of a
3D scene without causing that node's content to become flat like paper, which
limits the use of opacity strictly to leaf nodes (as far as CSS 3D is
concerned). Amelia calls this the "mixed content" problem.

*This goes against one of the main principles of CSS: that we should be able
to apply styling to the DOM without modifying the DOM structure that we wish
to style.* This alone should be enough evidence for you all to consider not
shipping (or, in the case of Chrome, reverting) the changes relating to the
latest css-transforms spec.

We should improve the spec so that we can achieve opacity like what we have
in the "legacy" implementations, but in a concise and interoperable way.

Matt, Tien, Simon, Rik, Tab, Chris, Philip, can you please not ship the
changes in Firefox and Safari and revert the changes made in Chrome 53
relating to css-transforms, and hold off until we can spec a valid 3D form
of opacity that is both intuitive and interoperable across browsers?

*The Problem*

The problem is simple to explain, and the solution that various of you have
proposed on this list and on GitHub
<https://github.com/aprender/ReFamous/issues/11#issuecomment-249382869> (which
is to apply opacity only to leaf nodes of a 3D context) does not work with
3D content that resides on non-leaf nodes. The workarounds are painful,
increase memory and CPU consumption in the case of my library, and make the
code more complex in a bad way (i.e. they make the code hacky and ugly).

Let me reiterate with examples what I mean and then explain the problems
that my library faces with the new behavior. Please note, these examples
are only tested in Chrome 53 (with the new behavior) and Firefox (with the
"legacy" behavior), and the examples are broken in Safari (for now).

An end user of my library can write simple HTML/CSS/JS like the following
(this is my actual library in action) to define 3D scenes:
https://jsfiddle.net/trusktr/ymonmo70/15 (aside from Firefox's glitchy
flickering, it renders the same in both browsers).

The `motor-scene` and `motor-node` elements in that example make up the HTML
API that my library exposes; they are the declarative side of my library's
API (we'll ignore the imperative side of the API for the purposes of this
message).

For the sake of my argument, let's color the motor-node element that has
the class "car" with a red border:
https://jsfiddle.net/trusktr/ymonmo70/9

Aside from the flickering, Firefox incorrectly paints the red border behind
the car at certain angles. Chrome gets it perfect.

The red box is the root of a tree inside of a 3D scene; it represents a
valid 3D object inside a 3D context, and it shows an example of a non-leaf
element that is visibly rendered (the red border); i.e. mixed content.

Now, let's make the *box AND its content* transparent by applying opacity
to the `.car` (a completely valid thing to want to do!). There are two ways
to do it. One way is using the `opacity=""` attribute of my custom element,
and the other way is using the vanilla CSS opacity property. My
`opacity=""` attribute simply passes the opacity value into the style
attribute of the same element (you can inspect element to see what it
does): https://jsfiddle.net/trusktr/ymonmo70/17

Chrome 53 flattens the car onto the red box plane. Firefox has the correct
behavior aside from the previous glitches I mentioned.

You now see that the `motor-node` with class "car" is transparent as well
as its descendants, but in Chrome the car is flattened onto the red box's
plane, which is completely unintuitive from a 3D point of view.

*It is also important to note that I can already achieve the flattened effect
in the current "legacy" implementation if I really want it. Here is an
example of a scene inside a scene (a 3D context inside a 3D context), and I
can make the inner scene have opacity less than one (it is already flattened
because it is already a new 3D context; you can inspect element to verify):
https://jsfiddle.net/trusktr/ymonmo70/18*

What you see is the inner scene inside of the outer scene's car's
windshield, and you see it is transparent. *Basically, I can already make a
new context myself and apply opacity to it. I don't really need this new
second way of doing the same thing.*

I can set the `transform-style` property to `flat` if I really want to. Why
should it be automatic? That is frustrating.
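
For example, here is a minimal sketch (JavaScript, with an assumed element
reference) of how an author can already opt into flattening explicitly under
the "legacy" behavior, rather than having it forced:

```js
// A sketch, assuming `.car` is the non-leaf element we want to flatten.
const car = document.querySelector('.car')

// Explicitly flatten the element's descendants into its plane...
car.style.transformStyle = 'flat'

// ...and then apply group opacity to that single flattened plane.
car.style.opacity = '0.5'
```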

*How My Library is Impacted*

If you inspect element in any of these examples, you will see the
`motor-node` elements in the DOM, and you will see the attributes that they
have (position, rotation, etc). For each motor-node, my library takes those
attributes, creates a transform matrix (using my polyfilled DOMMatrix
<https://github.com/trusktr/geometry-interfaces> class), then applies the
matrix values to the motor-node's CSS transform via the `style` attribute.
Modifying the position/rotation/etc attributes causes the style attribute
to be updated on the same motor-node element.
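
To illustrate the idea (this is only a sketch, not my library's actual code;
`parseTriple` and `applyTransform` are hypothetical helper names):

```js
// Hypothetical helper: split an "x y z" attribute string into numbers.
function parseTriple(value) {
  const parts = (value || '').trim().split(/\s+/).filter(Boolean).map(Number)
  const [x = 0, y = 0, z = 0] = parts
  return { x, y, z }
}

// Sketch of how a motor-node's attributes become a CSS transform.
function applyTransform(motorNode) {
  const pos = parseTriple(motorNode.getAttribute('position'))
  const rot = parseTriple(motorNode.getAttribute('rotation'))

  // Build a matrix with DOMMatrix (the class my library polyfills).
  const matrix = new DOMMatrix()
    .translate(pos.x, pos.y, pos.z)
    .rotate(rot.x, rot.y, rot.z)

  // Write the numbers back into CSS via the style attribute,
  // e.g. "matrix3d(...)".
  motorNode.style.transform = matrix.toString()
}
```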

In essence, the motor-node elements that the end user defines are the same
elements onto which the CSS transforms are applied.

When the `opacity=""` attribute is assigned onto a `motor-node` element,
the attribute's value is simply propagated into the `style` attribute of
that same `motor-node` as CSS opacity. And, as you can see, everything is
flattened starting in Chrome 53.
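
As a sketch (assumed class name; the real element definition is more
involved), the propagation is essentially:

```js
// Sketch: the opacity="" attribute is copied straight into CSS opacity.
class MotorNode extends HTMLElement {
  static get observedAttributes() { return ['opacity'] }

  attributeChangedCallback(name, oldValue, newValue) {
    if (name === 'opacity') this.style.opacity = newValue
  }
}

// Registration shown only for completeness of the sketch.
customElements.define('motor-node', MotorNode)
```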

To solve the problem, we have to un-nest the nested elements in order to
make them leaf nodes. Don't you see how this is a big problem?

To solve the problem in my library while still allowing my end users to
write the same exact HTML markup using `motor-scene` and `motor-node`
elements, my library has to do the following: instead of applying
transforms and opacity to those same `motor-scene` and `motor-node`
elements, the library will instead make those `motor-*` elements have
`display:none` styling, then my library will need to construct a scene
graph next to the `motor-scene` element using the non-nested approach where
all the elements in the 3D context are siblings and never nested.

If you inspect the elements in my examples, you see there are currently
only `motor-*` elements, and you'll see they have `style` attributes that
update while the car is rotating.

If I implement the non-nested solution that I just described, then what
you'll see when you inspect element are those `motor-*` elements, but then
you will also see (as sibling to the motor-scene element) a new set of DOM
elements: a div element that contains the 3D context, and inside of it a
bunch of non-nested div elements which are the things that will be visibly
rendered.

For example, if I implement the non-nested rendering approach, then we'll
see something like this in the element inspector:

```html
<!-- This is the original tree that the end library user writes. -->
<motor-scene style="display:none">
  <motor-node position="..." rotation="...">
    <motor-node position="..." rotation="...">
      content
    </motor-node>
  </motor-node>
</motor-scene>

<!-- This is the new non-nested tree that is the visible output. -->
<div class="motor-scene-output">
  <div style="transform: matrix3d(...)"></div>
  <div style="transform: matrix3d(...)">
    content
  </div>
</div>
```


This solution (creating a second tree that is the visible output and hiding
the original tree) is very ugly. Let me explain why.

The user who writes a 3D scene using `motor-scene` and `motor-node` elements
will expect any content inside of the `motor-node` elements to be rendered in
the visible output, which means that the content needs to exist in the new,
separate DOM tree, so we need to either:

   1. Move the content over from the motor-scene tree into the visibly
   rendered non-nested tree, or
   2. Clone the content over, in which case the original content remains in
   the `motor-scene` tree while the cloned content is placed into the new
   visibly rendered non-nested tree.

In the last example, the `content` text node had to be cloned from the
non-rendered motor-scene tree into the visibly rendered tree. Take a second
to imagine the problem associated with this (there are now two instances of
that content, and CSS selection will not work the same).
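
A sketch of what that mirroring might look like (hypothetical names; the
real code would be more involved):

```js
// Sketch: mirror the user's content into the rendered, non-nested tree.
const motorNode = document.querySelector('motor-node') // the hidden original
const output = document.createElement('div')           // the rendered mirror

for (const child of motorNode.childNodes) {
  // cloneNode(true) duplicates the content, so it now exists twice in the
  // DOM, and author CSS like `motor-node > .description { ... }` no longer
  // matches the copy that is actually rendered.
  output.appendChild(child.cloneNode(true))
}
```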

*Performance Implications and Code Complexity*

Basically, in solving the opacity problem by rendering a second visible
tree and making the original markup `display:none`, the following critical
and painful issues will now appear in my library:

   1. There are now two trees, which means *double the memory footprint in
   the DOM*.
   2. Transforms have to be multiplied down the motor-scene tree manually
   now, instead of the HTML/CSS engine doing it natively, which is *extra
   CPU cost and added code complexity*.
   3. Opacity has to be multiplied manually down the motor-scene tree now,
   instead of the HTML/CSS engine doing it natively, which is *extra CPU cost
   and added code complexity* (see the sketch after this list).
   4. With my library's current nested approach, changing a transform or
   opacity on a motor-node element means modifying the `style` attribute of
   that *single* motor-node element. With the non-nested approach whereby
   we create a second non-nested tree next to the original motor-scene tree,
   changing the transform or opacity of a motor-node element means we have to
   find the motor-node's counterpart element in the second non-nested tree and
   apply transforms and opacity via the `style` attributes to all of the
   descendants of that element. I.e. In the nested approach, changing a
   transform or opacity of a single element means we need to perform a single
   number-to-string-to-number conversion in order to pass numerical values
   into the CSS engine via the style attribute; *this results in a single
   conversion even if the target element has 1000 descendants*. In the
   non-nested approach, if we change the transform or opacity of an item in
   the scene graph, we must now apply transforms and opacity via the style
   attributes of the target motor-node's counterpart element *as well as all
   the descendants*. This means that instead of a single
   number-to-string-to-number conversion we will now perform 1000 conversions,
   which is *N times more memory and CPU cost* for the conversions where N
   is the number of descendants in the scene graph. This forces me to forfeit
   the benefits of nested-DOM with preserve-3d.
   5. Having to make a second non-nested tree *makes the code more complex*.
   6. Having to multiply transforms and opacities manually *makes the code
   more complex*. If I'm only using the DOM, it makes great sense to use the
   nested approach with preserve-3d to take advantage of numerical caching in
   the HTML/CSS engine, to avoid number-to-string-to-number conversions
   whenever possible, and to not have to worry about manually multiplying
   transforms and opacities.
   7. Having to mirror the user's content from the original tree into the
   rendered tree *makes the code more complex*.
   8. Having to mirror the user's content from the original tree into the
   rendered tree shows *a lack of consideration for CSS principles in the
   opacity-flattening change*. The user cannot use CSS selectors reliably
   anymore because my library has to output separate structures, and the user
   will now be encouraged to use IDs everywhere when they didn't necessarily
   have to before. There will be frustration from selectors not working. For
   example, someone might write `<motor-node><p class="description"></p></motor-node>`
   and may want to write CSS like `motor-node > .description {...}`, but that
   selector will fail because it will not select the actual content that is
   being rendered in the second tree that my library will generate.
   9. Code that is more complex (and also uglier) means a *higher surface
   area for bugs* and *higher maintenance cost*.
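
Here is a sketch of the manual work described in points 2 through 4 above
(hypothetical scene-graph node shape; `localMatrix` is assumed to be a
DOMMatrix and `outputElement` the mirrored div for that node):

```js
// Sketch: with the non-nested output tree, world transforms and opacities
// must be accumulated in JS and written to every mirrored element.
function renderSubtree(node, parentMatrix, parentOpacity) {
  // Multiply this node's local matrix into its parent's world matrix.
  const worldMatrix = parentMatrix.multiply(node.localMatrix)
  const worldOpacity = parentOpacity * node.localOpacity

  // Each write is a number-to-string conversion into the style attribute.
  node.outputElement.style.transform = worldMatrix.toString()
  node.outputElement.style.opacity = String(worldOpacity)

  // Every descendant must be updated too: N conversions per change,
  // instead of one when the engine does this natively with preserve-3d.
  for (const child of node.children) {
    renderSubtree(child, worldMatrix, worldOpacity)
  }
}
```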

This is all because of opacity flattening.

TLDR: The "legacy" behavior lets us write nested 3D scenes in which the *HTML/CSS
engine performs transform and opacity multiplications natively from
numerically cached values, without (potentially massive) amounts of
number-to-string-to-number conversions*, and it spares my library from having
to mirror end-user content when designing custom elements for the purposes of
creating 3D scene graphs, which avoids all the memory and CPU cost I just
mentioned. The "legacy" implementation already allows an element to be
flattened into a new 3D context by simply applying `transform-style: flat`;
it doesn't need to be automatic. The new opacity behavior is a *second* way
to achieve the same thing that we can already do if we desire, and having it
be automatic only limits developer freedom.

The change to opacity is a regression for 3D programming as far as CSS3D
goes, and the proper solution is to spec both the legacy-like behavior, so
that opacity can work in a 3D sense, and the new flattening opacity behavior.
The current solution -- which forces opacity to always be flattening --
simply does not work well with nested 3D scenes that contain non-leaf
content, and it shows a lack of consideration for the 3D future of the web at
a time when things like VR are becoming household names.

---

Please, please, please, Matt, Tien, Simon, Rik, Tab, Chris, Philip, and
anyone else working on the browser implementations, please don't ship
opacity flattening (or revert the changes in Chrome 53) until there's a
spec'd way to also have 3-dimensional opacity similar to the current
behavior in Chrome 52 that you are all calling "legacy". Please!

Sincerely,
- Joe

*/#!/*JoePea

On Fri, Sep 23, 2016 at 6:08 PM, /#!/JoePea <trusktr@gmail.com> wrote:

> I've been reading, but have been busy, and am involved with this only on
> my free time. I have one more response coming to better detail the pain
> points of the new opacity behavior and why the proposed solutions don't
> work.
>
> */#!/*JoePea
>
> On Fri, Sep 23, 2016 at 12:58 AM, Rik Cabanier <cabanier@gmail.com> wrote:
>
>>
>>
>> On Fri, Sep 23, 2016 at 12:39 AM, Matt Woodrow <mwoodrow@mozilla.com>
>> wrote:
>>
>>>
>>> On 22/09/16 11:37 PM, Rik Cabanier wrote:
>>>
>>>
>>> In addition, your proposal *also* affects web content because opacity is
>>> now applied to the group instead of being distributed to the children.
>>>
>>> It's true, but I figured it would be close enough to the old rendering
>>> that the majority of existing content would work with it (assuming they
>>> just want opacity, not specifically opacity distributed to the children)
>>> while also being correct wrt group-opacity and not implementation dependent.
>>>
>>>
>>>
>>>> This thread was started by an author who's content was broken, so it
>>>> seems reasonable to re-visit these assumptions.
>>>>
>>>
>>> Yes, we went over his examples and told him how to fix it (= apply
>>> opacity to the elements)
>>> Since Firefox knows that it's flattening, could it create a warning in
>>> the console and point to an MDN page with more information?
>>>
>>> 1: https://groups.google.com/a/chromium.org/forum/#!msg/blink-d
>>> ev/eBIp90_il1o/jrxzMW_4BQAJ
>>> <https://groups.google.com/a/chromium.org/forum/#%21msg/blink-dev/eBIp90_il1o/jrxzMW_4BQAJ>
>>> 2: https://bugzilla.mozilla.org/show_bug.cgi?id=1250718
>>> https://bugzilla.mozilla.org/show_bug.cgi?id=1278021
>>> https://bugzilla.mozilla.org/show_bug.cgi?id=1229317
>>>
>>>
>>> I still think that applying group-opacity to a subset of a 3d scene is a
>>> reasonable use case (that can't be easily solved without this), and one
>>> that we could support without breaking anything worse than we already plan
>>> to.
>>>
>>> Doesn't look like this is getting much traction though, so I'll probably
>>> just accept the spec change and go ahead with ship flattening of opacity in
>>> Firefox.
>>>
>>
>> Good to hear! This was a great discussion.
>> If you (or anyone else) can come up with a better solution, maybe we can
>> add it to the spec as another value when we integrate Simon Fraser's
>> proposal.
>>
>
>
