Re: request for guidance: centralized extensibility for HTML5 using X3D graphics

Hi Cris,

> Here's an interesting thought experiment. X3D is an outgrowth of VRML, which was designed at a time when embedding 3D in a plugin in a web browser seemed like the best idea. We know now the pitfalls that causes (poor page integration, plug-in ghetto, the need to do everything, including scripting and image loading, in the plug-in, etc.). If we were designing a set of 3D nodes for the web today, how would we proceed differently? For instance, would we have EAI or SAI interfaces? Certainly not, we would use the DOM and the scripting capability already built into the web browsers. 

100% right from my personal point of view. We should eliminate the interface layer and incorporate everything into the DOM. This is the core idea behind the X3D/HTML integration effort and X3DOM.

> Designing 3D as elements in the DOM would have tons of resources available that VRML and X3D did not have. Not only a scripting language and DOM, but CSS (including CSS Animation) an event model, XHR, Image loading and caching, video/audio, application cache and web workers, just to name a few. 

Again, I agree with most points. All of these UA features, including the event model and even CSS Animation, should be available to (X)3D authors.

However, I would at least provide a node or mechanism to include external 3D subtrees. The X3D Inline node does this automatically.
3D models tend to be huge compared to typical HTML pages, so we need a mechanism to partition them and load subparts progressively. You can, also with X3DOM today, use multiple XHRs and extend your DOM, but this has two major drawbacks.
First, it makes the handling of large data files complicated, since subparts cannot automatically and progressively load further parts; second, it forces the UA to push every polygon through the DOM structure. The Inline node gives the UA the freedom to download and optimize the subtree as needed.
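As an illustration, this is roughly what such a declarative include looks like in X3D-style markup (the file name is purely hypothetical):

```html
<!-- The subtree behind url can be fetched, parsed and optimized by the
     UA on demand, without pushing every polygon through the DOM -->
<Group>
  <Inline url='"engine-block.x3d"'/>
</Group>
```

The referenced file can itself contain further Inline nodes, which is what enables progressive, nested loading of large models.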

How someone would address and manipulate those scene parts is another topic (and one solved experimentally in X3DOM).

> Just look at CSS Animation alone. It adds a 4x4 homogeneous matrix and the ability to animate it (as well as animating colors, opacity, and other CSS properties). This eliminates the need for transforms and the entire set of X3D interpolators.

I agree that we need some declarative method for animation and transition. The UA should be able to calculate the relevant results; the application developer should not be forced to do everything in JavaScript.
However, I at least cannot see how every piece of functionality (e.g. CoordinateInterpolator) can be done by CSS Animation right now.
But maybe it could be extended to handle all aspects. ROUTEs (and "data flow" programming in general) are a powerful concept, but far away from anything you have in HTML.
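To make the CoordinateInterpolator point concrete, here is a minimal sketch in plain JavaScript of what that node computes: per-vertex linear blending between whole coordinate arrays, something CSS Animation (which animates single properties like a transform matrix or a color) has no counterpart for today. The function name and signature are illustrative, not from any spec:

```javascript
// Sketch of X3D CoordinateInterpolator semantics:
// key      - sorted key times in [0, 1]
// keyValue - one flat [x, y, z, ...] coordinate array per key
// fraction - current animation fraction in [0, 1]
function coordinateInterpolate(key, keyValue, fraction) {
  if (fraction <= key[0]) return keyValue[0].slice();
  for (let i = 1; i < key.length; i++) {
    if (fraction <= key[i]) {
      // Linear blend between the two surrounding key frames,
      // applied to every vertex component
      const t = (fraction - key[i - 1]) / (key[i] - key[i - 1]);
      const a = keyValue[i - 1];
      const b = keyValue[i];
      return a.map((v, j) => v + t * (b[j] - v));
    }
  }
  return keyValue[key.length - 1].slice();
}
```

Animating thousands of such components per frame is exactly the kind of work the UA, not author-side script, should be doing.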

> So what nodes would you need? You'd of course need a structural node (groups), transformable via CSS. You'd need a representation of mesh data and a set of material nodes to apply colors, textures and lighting. This could be a very simple set of nodes if you do what we did in WebGL, support only shader based materials. So you'd maybe just need a few nodes to define a set of shaders and the parameters sent to them (probably CSS based).

I totally disagree. Using concrete GLSL (or HLSL/Cg) shaders in an environment which should evolve and stay for years is not a good idea at all. It works nicely in, e.g., games, which have a fixed target hardware (class) and a very short lifecycle, but for HTML we need a better abstraction.

1) Graphics-API dependence: GLSL only works on OpenGL (ES) systems. The material system should be API-agnostic and open to other hardware APIs (e.g. DirectX) and even software (e.g. raytracers) or server-side implementations.

2) Render-system dependence: A concrete shader works only in one rendering pipeline. Changing the rendering pipeline, even while staying on the same API (e.g. OpenGL), makes the shader invalid. If we decided, for example, to implement a classic forward renderer using OpenGL/WebGL in HTML, we would have to stay with this method forever. We could not even add shadows, because the shaders do not handle shadow passes. We could never switch to a more modern pipeline (e.g. deferred lighting) without breaking all old shaders. HTML 3D rendering could never evolve.

3) Low-level API: GLSL is a low-level hardware shader programming language. It's far away from everything HTML developers are used to. GLSL developers and HTML developers have very different profiles. I don't think every HTML developer should have to deal with 4x4 transformation matrices and flexible lighting equations.
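For a taste of that low-level work: even the simplest shader-based pipeline forces the author to handle homogeneous 4x4 transforms by hand. A minimal sketch (illustrative only) of transforming a point the way a GLSL vertex shader would:

```javascript
// Column-major 4x4 matrix (16 numbers, as GLSL/WebGL store it)
// applied to a point [x, y, z], treated as [x, y, z, 1]
function transformPoint(m, p) {
  const [x, y, z] = p;
  return [
    m[0] * x + m[4] * y + m[8]  * z + m[12],
    m[1] * x + m[5] * y + m[9]  * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}
```

This is routine for a graphics programmer and entirely foreign to the typical HTML author, which is the point: the abstraction layer should absorb it.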

So we need a declarative material system which allows us to define shading independent of the graphics API and rendering system.
X3D goes in the right direction. It is API-agnostic but defines a specific rendering result (= rendering system) which can be achieved with different rendering pipelines.
(There are also nodes for concrete GLSL (and HLSL/Cg) shaders, but this is something we should not include in HTML.)
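For comparison, X3D's existing declarative material describes a shading result rather than a shader program, so the same markup can be realized by a GLSL pipeline, a DirectX backend, or a raytracer:

```html
<!-- Declares what the surface should look like, not how to compute it -->
<Shape>
  <Appearance>
    <Material diffuseColor='0.8 0.2 0.2'
              specularColor='1 1 1'
              shininess='0.4'/>
  </Appearance>
  <Sphere radius='1'/>
</Shape>
```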

The declarative material system is, however, outdated in some parts, and there were various proposals to include
a new, modern declarative material system in X3D. It shouldn't be as fat as MetaSL, but should be flexible and powerful
enough to support modern graphics pipelines.

> Then you'd want some Sensor nodes, which would allow user interaction and would feed resultant events into the DOM Event Model.

Agreed. Some form of feedback from the rendering tree is needed. I would even leave the PointingSensors (e.g. TouchSensor) out and would try to use HTML events on the 3D graph.
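A sketch of what that could look like, reusing the familiar HTML event model directly on a 3D element instead of a TouchSensor node (the handler name is hypothetical):

```html
<!-- A click on the rendered box dispatches an ordinary DOM event -->
<Shape onclick="handlePick(event)">
  <Appearance>
    <Material diffuseColor='0 0 1'/>
  </Appearance>
  <Box/>
</Shape>
```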

> GeoNodes, metadata, and even all the primitives (Box, Sphere, etc.) could be easily done in a JavaScript library and kept out of the core node set.

The primitives are not really needed, but they are nice for quick and easy shaping. You are right: they are not essential.

> There would also be no need for TimeSensors or other timing or animation related nodes.

There must be some way to describe the animation declaratively so the UA can run the transition.
I personally think that the generic "data flow" design works well for complex scenes, but it's an open question.

> You could also eliminate the complexity of Prototypes, which can be done in a JavaScript library.

Agree. That's what we also proposed in our profile.

> On top of this core layer you might want physics, higher order surfaces (NURBS, etc.), and maybe particle system nodes. But those could be a later profile.


> What I'm getting at here is that, if designed today, I think 3D for the web would be a much different and much simpler set of nodes than we see in X3D. That's why I say that WebGL is a good first step. It allows us to take the most appropriate steps toward the best 3D solution.

I believe there will be support for both a graphics API (e.g. WebGL) and
declarative 3D in future HTML versions. I personally (as an OpenGL coder)
would love to see WebGL in every browser.

Both have their strengths and weaknesses, and they will coexist just as canvas and SVG (as 2D layers) coexist today.

The declarative 3D layer still looks to me like a well-defined profile (= subpart) of X3D plus some extensions.
I don't think that X3D is perfect, but as an ISO and well-specified standard it is an excellent candidate.

The world already has too many 3D formats and standards. Let's use one in HTML and not define yet another one.

best regards

Received on Wednesday, 24 February 2010 17:18:05 UTC