Re: WebVR and DRM

Florian: Please see EXT_protected_textures.txt
<https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_protected_textures.txt>
and EGL_EXT_protected_content.txt
<https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_protected_content.txt>
for more background on the related GL extensions. In short, yes, there is
hardware support, but it's fairly flexible. Basically, the extensions allow
creating and using textures and images that are writable but not readable,
and this is hardware-enforced through the subsequent stages of the pipeline
when the feature is in use. So the application should be able to create an
arbitrary surface that displays the content and integrate it with the 3D
scene in certain ways. For example, doing occlusion via masking would be possible (i.e.
allowing avatars to move around in front of the screen), and I think also
postprocessing such as color changes, but doing general room illumination
based on screen content would generally not work since you can't read the
image in an arbitrary shader. The example you gave earlier of encoding
metadata in pixels would not work as-is, but something similar should be
possible by using multiple distinct tracks. For example, I think it is
already supported to use a subtitle track alongside a protected video with
synced timing.
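To make the separate-track idea concrete, here's a rough sketch of what the application side could look like: the protected video stays opaque, while an ordinary unprotected data track carries per-time metadata that the app looks up against the playback clock. Everything here (the cue shape, the field names) is made up for illustration, not an existing API:

```javascript
// Hypothetical sketch: metadata delivered as a separate, unprotected cue
// track that is kept in sync with the protected video's playback clock.
// The cue format below is an assumption, not any standardized shape.
function activeCues(cues, currentTime) {
  // Return every cue whose [start, end) window covers the playhead.
  return cues.filter(c => currentTime >= c.start && currentTime < c.end);
}

const metadataTrack = [
  { start: 0.0, end: 4.0, data: { hotspot: "door" } },
  { start: 4.0, end: 9.5, data: { hotspot: "window" } },
];

// At playback time 5.2s the second cue is active.
console.log(activeCues(metadataTrack, 5.2).map(c => c.data.hotspot));
```

In a real player you'd drive this from the video element's playback time each frame; the point is that the metadata never has to live inside the protected pixels.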

There are no inherent hardware-side restrictions as far as projections, 3D
tracking, etc. are concerned. The limitations mentioned earlier in the
thread would generally be on the browser software/implementation side,
assuming the browser only supports specific layer setups from an API point
of view, and that would be modifiable without hardware changes.
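To illustrate why projection isn't a hardware problem: placing a protected video quad in 3D is plain transform math on the compositor side, since the hardware constrains the pixel *contents*, not where the surface goes. A toy sketch (the matrix layout and toy perspective are my own choices for illustration):

```javascript
// Minimal row-major 4x4 point transform with perspective divide.
function transformPoint(m, [x, y, z]) {
  const w = m[12] * x + m[13] * y + m[14] * z + m[15];
  return [
    (m[0] * x + m[1] * y + m[2] * z + m[3]) / w,
    (m[4] * x + m[5] * y + m[6] * z + m[7]) / w,
    (m[8] * x + m[9] * y + m[10] * z + m[11]) / w,
  ];
}

// A toy perspective: w' = z, so lateral offsets shrink with depth.
const perspective = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 1, 0,
];

// A video-quad corner at depth 2 lands at half its lateral offset.
transformPoint(perspective, [1, 1, 2]); // → [0.5, 0.5, 1]
```

None of this ever touches the protected texels, which is why it works regardless of DRM; the restrictions are about what layer setups a given browser chooses to expose.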

Please keep in mind that all this would be intended as an additional and
optional way to do a video layer. It should still be possible to do
whatever you like with your own videos in plain WebGL. If the direct video
layer support doesn't do what you want you're free to completely ignore it.
It's supposed to be a convenience to support simple use cases more easily
(and possibly with improved efficiency if the browser can make assumptions
about how the images get used), and the EME/DRM features are for use cases
where content owners would not otherwise be willing to use web technology
at all for providing their content. I assume that implementers will
generally be open to reasonable suggestions on how this could work better.
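As a software analogue of the occlusion-via-masking point above: the compositor only ever writes the protected pixels to the output where nothing occludes them, and never reads them back into any shading computation. This toy sketch uses made-up stand-in structures, not a real compositor or the actual extension API:

```javascript
// Toy compositor. mask[i] = 1 means the avatar covers pixel i; 0 means
// the (protected) screen is visible there. The protected frame is
// write-only from the app's perspective: we copy it to the output where
// it is unmasked, but never sample it to compute other colors.
function composite(protectedFrame, avatarFrame, mask) {
  return mask.map((m, i) => (m ? avatarFrame[i] : protectedFrame[i]));
}

const videoPixels  = ["v0", "v1", "v2", "v3"]; // stand-ins for protected texels
const avatarPixels = ["a0", "a1", "a2", "a3"];
const avatarMask   = [0, 1, 1, 0]; // avatar occludes the middle two pixels

composite(videoPixels, avatarPixels, avatarMask); // → ["v0","a1","a2","v3"]
```

In real hardware this selection would happen via the stencil/depth test during composition, which is exactly the kind of operation the protected-content path still allows.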

Personally, I'd love it if it were possible to build something like a
shared video player that lets people watch a third-party-supplied movie
together, with custom interactions on top of it. I think this would
potentially be technically feasible via an EME-enabled video layer, though
I expect that doing it in practice in a way that satisfies all the
stakeholders would be difficult.

Disclaimer: this is all from my point of view, I'm not speaking for anyone
else.

On Wed, Jul 12, 2017 at 2:26 AM, Florian Bösch <pyalot@gmail.com> wrote:

> @Brandon
>
> It's my understanding that on platforms with a hardware decryption module,
> all the decrypting and displaying is done directly by the hardware.
> Wouldn't any capability such as projections, color changes, masking,
> rotoscopy, metadata, 3d tracking etc. require the hardware decryption
> module to support these capabilities? What do you have to change to make
> that happen? Do you have to ship new hardware? Does it suffice to deploy
> new drivers?
>

Received on Thursday, 13 July 2017 07:30:23 UTC