<model> explainer update

Hi everyone,

I have an update to the <model> explainer we initially posted back in 2022! I would love for folks to read it ahead of TPAC so we have a chance to reacquaint everyone with what <model> is, what kinds of challenges it faces, and some potential ways of addressing those problems.

It’s not my intention to set anything in the explainer in stone, but to get enough detail in one place for people to agree / disagree with. If you get the chance between now and TPAC, please take a look through the document (and the demo!) to see the refinements to the proposal. There’s a PR out for both to land in main, but you can read from the branch here:
https://github.com/immersive-web/model-element/blob/explainer_demo/explainer.md
Additionally, there is an interactive explainer demo that covers the broad context that <model> is attempting to address. It’s also in the PR to get “officially” hosted, but I have a copy of it up here: 
https://zachernuk.neocities.org/2024/model_explainer/ 
It’s intended to be viewed by scrolling on a computer, but should work on a phone as well. 

I look forward to further discussion on all of this next week!
Many thanks,
Brandel

What’s new:

Tighter limitations on v1 functionality

As an initial candidate, I feel that <model> shouldn’t encode interactive state beyond what is presented via an animation timeline or camera controls. 
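To make that scope concrete, here's a minimal sketch of what a v1 element could look like. Attribute names are taken from the draft explainer and may change:

```html
<!-- A v1 <model>: the only "interactivity" is the asset's own
     animation timeline plus built-in camera manipulation -->
<model src="teapot.usdz" autoplay loop stagemode="orbit">
  <!-- fallback content for browsers without <model> support -->
  <img src="teapot.png" alt="A teapot">
</model>
```

Anything beyond this (per-object hit-testing, scripted scene state, and so on) would be out of scope for the first version.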
Update to camera / view controls

The first proposal’s use of “camera” was too limited. entityTransform provides a DOM-native API for constructing a more sophisticated view without adding too much extra complexity. The API is also synchronous (even if the view updates are managed out-of-process); more on that below.
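As an illustration of what "DOM-native" means here, a sketch assuming entityTransform is a matrix-valued property in the style of the Geometry Interfaces spec (the property name is from the draft; the exact type is subject to change):

```html
<model id="chair" src="chair.usdz"></model>
<script>
  const model = document.getElementById("chair");
  // Assigning a matrix re-poses the entity synchronously from the
  // page's point of view, even if rendering is out-of-process.
  model.entityTransform = new DOMMatrix()
    .translate(0, 0, -1)   // move the entity 1 unit away
    .rotate(0, 30, 0);     // yaw it 30 degrees (illustrative values)
</script>
```

Reusing DOMMatrix means authors get composition, inversion, and interpolation behaviour they already know from CSS transforms, rather than a bespoke camera object.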
Bounding box information

This is used in service of understanding how to set up a view of a model, in the event that the author doesn’t already know its contents.
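For example, bounds could be read after load to frame an unfamiliar asset. The property names below (boundingBoxCenter, boundingBoxExtents) follow the draft and should be treated as provisional:

```html
<model id="statue" src="statue.usdz"></model>
<script>
  const model = document.getElementById("statue");
  model.addEventListener("load", () => {
    // Provisional names: point-like values describing the asset's bounds.
    const center  = model.boundingBoxCenter;
    const extents = model.boundingBoxExtents;
    // Back the entity away far enough for its largest dimension to fit.
    const distance = 2 * Math.max(extents.x, extents.y, extents.z);
    model.entityTransform = new DOMMatrix()
      .translate(-center.x, -center.y, -center.z - distance);
  });
</script>
```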
environmentmap attribute and events

This is important for an adequate level of visualization control. I feel that applying a single IBL (image-based light) is a “good-enough” start that allows authors to present model media in different contexts, and it means model files are not impairing their ability to display correctly in a mixed-reality view. 
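A sketch of how the attribute and its events could look; the event names here are illustrative assumptions, not settled API:

```html
<!-- One IBL per element, set declaratively -->
<model id="car" src="car.usdz" environmentmap="studio.hdr"></model>
<script>
  const car = document.getElementById("car");
  // Illustrative event names for load success / failure of the IBL:
  car.addEventListener("environmentmapload", () =>
    console.log("environment map applied"));
  car.addEventListener("environmentmaperror", () =>
    console.log("falling back to default lighting"));
  // Swapping lighting contexts is just an attribute change:
  car.setAttribute("environmentmap", "sunset.hdr");
</script>
```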
Reducing the use of Promises

While many APIs like HTMLMediaElement and WebGL/WebGPU offload activity to other processes, that’s not always a good reason to use a Promise-based API. Particularly with attributes like view pose and animation time, a reasonable answer now is likely better than a perfect answer at some unknown time in the future. 
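The HTMLMediaElement analogy suggests the shape: a sketch assuming <model> exposes a synchronous, number-valued currentTime for its animation, as the draft proposes:

```html
<model id="fan" src="fan.usdz" autoplay></model>
<script>
  const fan = document.getElementById("fan");
  // Like HTMLMediaElement.currentTime, this reads the page's best
  // current value immediately, with no Promise round-trip, even if
  // the actual playback advances in another process.
  function logPlaybackTime() {
    console.log(fan.currentTime);
    requestAnimationFrame(logPlaybackTime);
  }
  requestAnimationFrame(logPlaybackTime);
</script>
```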
Manipulation terminology change

The automatic controls discussed were previously known as “interactive” mode, which was noted to be an ambiguous term. Clarifying that it relates less to stateful interactivity and more to a mode of interaction on a “stage”, in an “orbit” mode, is intended to differentiate this.
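In markup terms, the renaming could look like the following (the stagemode attribute and its “orbit” value follow the draft; other values are not yet settled):

```html
<!-- Previously described as "interactive"; the new name signals that
     this is a built-in manipulation mode, not stateful scene logic -->
<model src="globe.usdz" stagemode="orbit"></model>

<!-- Omitting the attribute leaves the model as a static view -->
<model src="globe.usdz"></model>
```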
Removal of audio

While audio is part of some 3D model asset types, that capability is already available through existing APIs, so providing it is not a critical part of an initial implementation.

Received on Wednesday, 18 September 2024 22:41:38 UTC