- From: John Foliot <john@foliot.ca>
- Date: Wed, 12 Jun 2013 21:54:08 -0700
- To: "'Duncan Bayne'" <dhgbayne@fastmail.fm>, <public-restrictedmedia@w3.org>
Duncan Bayne wrote:

> John,
>
> Thanks for your detailed reply.

Happy to oblige. The accessibility folks *are* watching this at the W3C (which is a good reason to keep it inside of the W3C, IMHO).

> This is the area of accessibility tech that I'm more familiar with, and
> it's the core of my accessibility concern w.r.t. CDMs. My understanding
> is that text protected by a CDM will not be accessible through the DOM,
> and thus will be inaccessible to screen reading tools.

I am curious what leads you to this understanding? Content, any content, that is encrypted but not decrypted will be "inaccessible"; it doesn't matter which technology stack any given user is running on their host machine. Conversely, content that is decrypted and "running" in the browser is, by virtue of how browsers work, rendered into the DOM, and thus communicated via the Accessibility APIs to those tools that need that API access to work (i.e. screen readers). The "accessibility" work happens after the decryption happens, and if content can be rendered in the browser, it can be exposed to AT.

Our questions about that at the Face-to-Face were expressly looking at the edge cases and scenarios where this *might* be a problem. No significant problems emerged during those discussions, so from an accessibility perspective: keep working on what you are working on, and come back when you reach the next step. That next step is a 'working' spec that implementations can be built against, and then, like any other technology development, let's test, get feedback, iterate, and get it done right, which includes the accessibility piece.

> The CDM can choose to provide a clear-text stream for use by such a
> device, but in the case of text, wouldn't that be exactly equivalent to
> the protected content in the first place?

Possibly, and that is what was driving a lot of the questions being asked at the F2F.
Most specifically, if a content provider supplies the full transcript of a movie, what (exactly) is the difference between that transcript and, say, a paperback novelization of Iron Man 3? Those content providers are increasingly being mandated by legislation to ensure that supporting content such as transcripts is provided, so it is not a question of "if" they wake up to this fact, but "when". The thing is, we're already thinking ahead to that, and (not to beat the horse completely to death) because this work is happening inside of the W3C, not only do we get to ask, we get to influence.

So, once again, the content owners will have two choices. One is to "package" that supporting content inside of the media wrapper (MP4, WebM, whatever), so that when the content is decrypted, so too is that supporting content. That seems both obvious and simple, and nothing with EME/CDM/DRM has surfaced to refute that simplicity; but please, bring forward evidence to the contrary. The other choice is out-of-band delivery of that content: again, we asked, and from a *technical* perspective it *might* be a bit more complex, but there has been zero demand to date for that.

We even explored the real possibility that the media would be un-encrypted, but that the supporting content (a sign-language interpretation delivered as picture-in-picture, for example) *was* encrypted because it was being produced (under license) by a third-party provider who wishes to protect their revenue stream: how would that work, what technical barriers (if any) might exist, etc.

Convince me that this level of accessibility engagement would happen outside of the W3C. I've simply never seen it, and can find little-to-no proof of it ever happening elsewhere. (With no disrespect to other web-standards bodies, the IETF for example, I don't think they have this level of accessibility engagement in any of their work, but I'm happy to be proven wrong.)
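To make the two delivery choices concrete, here is a minimal markup sketch. This is an illustration, not anything from a spec draft; the file names and caption URL are hypothetical. Out-of-band supporting content rides alongside the encrypted media as an ordinary clear-text resource, while in-band content is packaged inside the media container itself:

```html
<!-- Out-of-band delivery: the caption file lives outside the
     encrypted container and is fetched in the clear by the
     browser via a standard <track> element.
     (File names here are hypothetical.) -->
<video id="movie" controls>
  <source src="movie-encrypted.mp4" type="video/mp4">
  <track kind="captions" src="movie-captions.vtt"
         srclang="en" label="English" default>
</video>

<!-- In-band delivery needs no extra markup: caption or subtitle
     tracks packaged inside the media wrapper surface through the
     video element's textTracks list once the media (including any
     CDM decryption step) has been processed, and from there reach
     the DOM and the Accessibility APIs like any other text. -->
```

Either way, once the text is rendered by the browser it is ordinary DOM content, which is the point being argued above.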
> It seems unlikely to me that
> such a solution would seem acceptable to content providers, but as
> you've correctly pointed out, you have far greater experience &
> knowledge in this area.

Yup, that was one of our questions too. It seems the content providers are currently asleep, but we're not <smile>, and because we can engage with the engineers, they too are now aware of what we potentially see coming down the road, from our perspective and expertise. The engineers are now aware, they are thinking about it, and so we are in, if not a great place, then an informed and aware place, with potential designs and strategies being formulated. All thanks (in my mind) to the fact that this is how we do standards at the W3C.

> I mentioned screen-readers because they are the assistive technology
> with which I'm most familiar. And, as I explained above, I'm still a
> little uncertain as to how screen-reader integration with EME / CDM -
> protected text (not video or audio content!) - would work in practice.

Hopefully I have done something to help clarify those questions.

> Even given that concern, I'm quite happy to amend my statement to:
>
> "The *sole* purpose of EME is to interop with closed-source proprietary
> blobs called CDMs. These will most assuredly not be available to all
> people regardless of hardware, software, network infrastructure, and
> geographical location. They will probably not cater for those who
> speak non-mainstream languages."
>
> Would that satisfy you?

It's not about satisfying me; it's about respecting the real barriers that Persons With Disabilities (PWD) face every day, in life and on the web. It's about not using them, and their conditions, to drive a political agenda. I think you've heard that from me loudly enough that I won't further belabor the point.
As for why I believe that work on this technology inside of the W3C is a good thing: unpopular as it is to some, and as fraught with potential pain points as others gloomily suggest, I remain convinced that working on this inside of the W3C is the best place for it to happen for PWD, because here, at least, they have a real voice and a real commitment to not leave *them* behind. It's about guiding the outcomes (not dictating them), and about ensuring that everyone's concerns are heard and met.

Geography and internationalization? There is a group inside of the W3C that can provide input and guidance there. Hardware and software? The browsers *are* the software, and increasingly, the OS too. They work inside of the W3C, so it's a good place for us all to work; chasing them out means they go off and work elsewhere, or on their own. I can't see that being a good thing in any scenario.

Making this work inside of "Open Source" is only politically difficult, not technologically so. We need to respect that thinking, but we cannot let it be the only way of thinking.

Have I missed anything?

JF
Received on Thursday, 13 June 2013 04:54:47 UTC