Re: [cloud browser] minutes - 17 August 2016

Hi Colin,

> At least the way I see it: on TV you will use applications instead of
> browsing through documents. You will not do a Google search but start an
> application from a portal.

Hmmm... perhaps then the name "Cloud Browser" is inaccurate. When you say
'application', are you referring to a "web app" or to a native app that
depends on network connectivity (similar to mobile apps)? While the need to
ensure accessibility is equal for both, this statement certainly causes me
to think differently. For example, consider a real-estate 'site' that
features multiple photos of different properties for sale (I can point to
Zillow.com as one such site, but there are numerous others out there). I
see a huge desire for sites like that to be able to show photos of property
listings on demand, outputting the photos on a large screen for the whole
family to view and discuss (this is a use case, right?). Is the expectation
then that Zillow would also create an 'app' for delivery to the big-screen
environment, or would the end user simply fire up their 'cloud browser' and
go to zillow.com?

> I personally don't like to differentiate between types of disabilities
> (though it is fine to make use cases). The assumption that auditory and
> cognitive impairments are solely the responsibility of the content
> producer is incorrect. Just to provide an example of both: there is a
> technical challenge to provide captions on a video through the cloud
> browser.
Thanks for this. I agree that the auditory and cognitive groups need to be
represented as well (and I did outline their overarching issues); however,
the impacts on those user groups are different from those on the blind and
mobility groups as far as what we are discussing today (at least from what
I can tell). Yes, the cloud browser will need to support captions, but
unless the content author provides them, there is nothing to support...
Likewise, good page, site, and UI architecture is critical for some users
with cognitive disabilities, but again, much of that is the function of the
site creator: the browser (cloud or otherwise) can only render what the
author has provided.

> There is a technical challenge to provide captions on a video through the
> cloud browser. Normally (in broadcast) the captions are in-band [2],
> which could be - for example - rendered by the middleware. The cloud
> browser needs a way to provide this as well.
I would be curious to better understand the 'technical challenge' of
rendering captions via the cloud browser, as I have been (perhaps falsely)
working under the assumption that the cloud browser would support the full
HTML5 specification, which would include the <video>, <audio> and <track>
elements (as well as supporting the API for extracting in-band captions
from the video wrapper). Current browsers render the time-stamped text
(whether in-band or out-of-band, in either the WebVTT or TTML markup
languages - with a preference for WebVTT) as part of the browser function,
and I am confused by the reference to 'middleware' in your example. (I
thought the browser was the middleware...)
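For what it's worth, the out-of-band case is already well defined in HTML5.
A minimal sketch (the file names here are hypothetical, purely for
illustration):

```html
<!-- Hypothetical example: given an out-of-band WebVTT file, any
     HTML5-conformant browser fetches the cues and renders them over
     the video itself - no separate middleware is involved. -->
<video controls>
  <source src="property-tour.mp4" type="video/mp4">
  <track kind="captions" src="property-tour.en.vtt"
         srclang="en" label="English" default>
</video>
```

As I understand it, the in-band case is what your [2] addresses: tracks
extracted from the media container are exposed through the same TextTrack
API, so from the page's perspective the two cases look identical.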


I revisited the Intro page you referenced (your [1]), and my concern is
that the diagram shows AT referencing the "Orchestration", when in fact I
suspect that it actually needs to connect to the RTE currently shown as
part of the video client, as what the Assistive Technology outputs from the
DOM is simply an alternative 'rendering' that is dependent on the
Accessibility API(s) of the various OS platforms (RTEs).

Based upon this, the solution will probably need to look the same as what
we have in other form factors/OSes today: AT tools that are built directly
into the OS/RTE (VoiceOver, VoiceView for Fire TV devices, etc.), or an
OS/RTE that supports 3rd-party software (JAWS, ORCA, ZoomText, etc.) and
hardware (complete with input ports on the device for tools like
alternative keyboards and braille displays). I am currently unaware of any
tool today that ships with its own onboard OS/RTE that could be inserted
into this diagram. (There are some standalone tools such as Braille
note-takers and e-book readers - http://www.daisy.org/tools/hplayback -
that are 'self-contained', but I am not sure whether they could be
connected to the 'Orchestration' you describe in any effective manner today
- but I don't know, TBD.)

I look forward to further discussing this at TPAC - see you there!

JF

On Fri, Aug 19, 2016 at 3:40 AM, Meerveld, Colin <C.Meerveld@activevideo.com
> wrote:

> Hi John,
>
> Thank you for your input. Very helpful! I believe accessibility has a
> high priority in this Task Force. The Cloud Browser could never be a
> success or an accepted standard if there is no way to make it accessible
> to people with a functional disability. We have some initial ideas stated
> in the introduction to the cloud browser [1]. That said, I think there is
> a difference with a regular browser, or local browser as it is referred
> to in the introduction. We are part of the W3C Web and TV group. Making
> the web work on TV would be a subject on its own. It is not as simple as
> plugging a braille display into a smart TV and it will work. Enabling
> assistive technology to work would require liaison with a lot of other
> organisations. The way it currently works is that each solution (e.g. STB
> middleware) implements its own way of accessibility. In addition, the
> model of using the web on TV is different. At least the way I see it: on
> TV you will use applications instead of browsing through documents. You
> will not do a Google search but start an application from a portal.
> Obviously, if a cloud browser implementer would like to provide a browser
> to surf the web, that should be possible in our architecture.
>
> I personally don't like to differentiate between types of disabilities
> (though it is fine to make use cases). The assumption that auditory and
> cognitive impairments are solely the responsibility of the content
> producer is incorrect. Just to provide an example of both: there is a
> technical challenge to provide captions on a video through the cloud
> browser. Normally (in broadcast) the captions are in-band [2], which
> could be - for example - rendered by the middleware. The cloud browser
> needs a way to provide this as well. Another challenge could be to enable
> a seamless experience. It could be quite hard for someone with a
> cognitive impairment to learn all the different ways to navigate through
> the applications. A solution could be to provide an overarching way
> through the cloud browser, which would make this much easier. This is far
> from trivial in the cloud browser architecture.
>
> I will also attend TPAC and am happy to chat on this topic. In our latest
> meeting we decided to focus on the use cases state, session and control,
> but we will work on other use cases after TPAC as the TF will probably be
> extended for 6 months.
>
> [1] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_
> TF/Introduction_cloud_browser#Accessibility
> [2] https://dev.w3.org/html5/html-sourcing-inband-tracks/
>
> On 18 Aug 2016, at 19:59, John Foliot <john.foliot@deque.com> wrote:
>
> > Alexandra will contact John F on accessibility use cases.
>
> Hi All,
>
> I had provided the following previously: https://www.w3.org/2011/webtv/
> wiki/Main_Page/Cloud_Browser_TF/UseCases#Accessibility
>
> By my reading, and following along as best I can, I see that to-date you
> have been focusing on how the dumb-screen connects to the cloud browser
> from an architectural/technical basis. From an accessibility perspective
> however, the larger question is: how does the *user* interact with the
> cloud browser? I have not yet seen any discussion on how the end user
> inputs/interacts with the cloud browser.
>
> For example, if the end-user wants to do a Google search on their cloud
> browser (rendered on their 60" big-screen TV), how do they input the
> search term text? Beyond the thorny question of text input, how else does
> the end-user "click" a button on the rendered web page? Without a
> traditional keyboard and pointing-device mechanism (normally associated
> with more traditional computing environments), or a 'touch interface'
> (tablets/mobile devices), interacting with the content is going to be
> left to what? An on-screen keyboard? (And how/who supplies that?) How
> would users interact with tab-focusable content on a webpage in the cloud
> browser?
>
>
> Returning to accessibility: traditionally, when we talk about
> accessibility considerations, there are 4 basic categories of disability
> that we focus on, and each of those categories offers a range of
> impairment (it's never black or white). They are:
>
>    - Visual disability (which can range from complete blindness to low
>    vision and color-blindness issues)
>    - Auditory disability (which can range from complete deafness to
>    varied forms of hearing loss, or configurations or scenarios that do not
>    support audio output - think stand-alone kiosks, or environments where
>    audio can be a problem such as in a library - shhh - or a steel foundry -
>    "WHAT??? I CAN'T HEAR YOU...")
>    - Mobility disability (which again ranges from complete quadriplegia
>    or amputation to lacking fine-motor control due to arthritis, tremors,
>    or temporary conditions like having a cast on your hand)
>    - Cognitive disabilities (from Down syndrome to ADHD or dyslexia)
>
> Equally, some users may have multiple disabilities across the different
> categories, and I often reference seniors as one such class of user: as
> we age we may start to exhibit deficiencies such as reduced vision
> (seniors need reading glasses), reduced hearing (seniors tend to need
> hearing aids more often), reduced mobility (arthritis), and even reduced
> cognition (Alzheimer's). This makes seniors an interesting and important
> use case in and of themselves.
>
>
> From the perspective of this group's activities, for now we can presume
> that issues affecting users with either Auditory or Cognitive impairments
> will likely be addressed by the content producer (i.e. ensuring that
> captions are provided for multimedia content, or that pages can be
> personalized or simplified for those with cognition issues, etc.).
> However, issues surrounding Visual disabilities and Mobility disabilities
> will be directly impacted by - or will directly impact - your ability to
> deliver this in an accessible fashion.
>
> Blind users are dependent on screen-reading software to be able to
> interact with web content. Screen readers are either third-party software
> tools (JAWS, NVDA, others on Windows; ORCA, others on Linux) or are
> provided as part of the OS (VoiceOver on Mac and iOS, TalkBack on
> Android, Narrator on Windows/Windows Phone, VoiceView for Fire TV devices
> - https://www.amazon.com/gp/help/customer/display.html?nodeId=202042100).
> A critical question for this group is how to support this requirement
> (screen reader) in your cloud browser. Who is responsible for supplying
> that functionality? The cloud browser ecosystem, or the end user? If it
> is the end user, how (exactly) do you (we?) envision that happening? Will
> there be an extended requirement for an external 'dongle' attached to
> your dumb-screen that the end user will require to actually interact with
> the cloud browser? (This would probably mirror the FireTV dongle
> solution, AFAIK.)
>
> For users who are low-vision, some will use the built-in zoom controls we
> see in today's modern browsers (and hopefully in the cloud browser(s)
> being discussed here). However, in many cases the zoom functionality
> offered by the browser is insufficient for the individual, at which point
> they too will use a 3rd-party tool (ZoomText, MAGic, etc.) that will not
> only increase the magnification significantly beyond the 200% threshold
> referenced in WCAG (https://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-
> audio-contrast-visual-presentation.html) but will also allow the end user
> to change color palettes to varying degrees, to address other visual
> deficiencies (video: https://www.youtube.com/watch?v=afny3NMZBnI - and
> worth watching...)
>
> Users with mobility impairments may not be able to interact with a
> standard TV remote. A means and method of adding alternative input
> devices will also need to be considered, which could range from speech
> input (not sure how to do that with a cloud browser today, as it requires
> a microphone) to alternative keyboards and switching devices such as
> sip-and-puff mechanisms.
> (See the foot-mouse here: http://www.turningpointtechnology.com/Sx/
> AltMice.asp or sip-and-puff here: http://www.orin.com/access/sip_puff/)
>
>
> I have thought about this a bit - off and on - and it seems to me that at
> a minimum, the larger dumb-screen will require an input interface port
> (USB?) that would allow a dongle of sorts to be inserted, and by that means
> alternative interactions (and perhaps even assistive technology) could be
> introduced into the mix. How to wire that all up architecturally and
> technologically I'm not quite sure, but I think this is or will be the
> ultimate accessibility challenge this group will need to tackle.
>
> I hope this helps (I can add this to the wiki as well), and I am happy to
> continue here. I will also be in Lisbon for TPAC, and perhaps if others are
> there we can chat about this more. There will be other accessibility SMEs
> in attendance then as well, and so either formally or informally I think
> there will be an opportunity for further discussion that week if desired.
>
> JF
>
>
> On Wed, Aug 17, 2016 at 10:54 AM, Kazuyuki Ashimura <ashimura@w3.org>
> wrote:
>
>> available at:
>>   https://www.w3.org/2016/08/17-webtv-minutes.html
>>
>> also as text below.
>>
>> Thanks a lot for taking notes, Nilo!
>>
>> Kazuyuki
>>
>> ---
>>    [1]W3C
>>
>>       [1] http://www.w3.org/
>>
>>                                - DRAFT -
>>
>>                       Web&TV IG - Cloud Browser TF
>>
>> 17 Aug 2016
>>
>> Attendees
>>
>>    Present
>>           Colin, Alexandra, Steve, Nilo, Kaz
>>
>>    Regrets
>>    Chair
>>           Alexandra
>>
>>    Scribe
>>           Nilo
>>
>> Contents
>>
>>      * [2]Topics
>>      * [3]Summary of Action Items
>>      * [4]Summary of Resolutions
>>      __________________________________________________________
>>
>>    <scribe> scribe: Nilo
>>
>>    The architecture chapter is now finalized
>>
>>    <alexandra>
>>    [5]https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_T
>>    F/Architecture
>>
>>       [5] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_T
>> F/Architecture
>>
>>    we should start to review the chapter and provide comments on
>>    clarity etc.
>>
>>    Review it in the next two weeks
>>
>>    Either send public comments or correct minor mistakes directly
>>    in the wiki
>>
>>    We'll discuss the changes in the next meeting, two weeks from
>>    now.
>>
>>    Chapter should be finalized before TPAC
>>
>>    Colin: some of the terminology seems interchangeable
>>    ... look more closely at the differences between the text and
>>    images, e.g., what does "input data" mean?
>>    ... This way those who use our pictures (e.g., GSMA) will not
>>    have pictures which are not properly explained
>>
>>    Alexandra has also updated the TF page
>>
>>    <alexandra>
>>    [6]https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_T
>>    F
>>
>>       [6] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF
>>
>>    Added Colin's text on introduction to the cloud browser.
>>
>>    <alexandra>
>>    [7]https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_T
>>    F/Introduction_cloud_browser
>>
>>       [7] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_T
>> F/Introduction_cloud_browser
>>
>>    Colin notes that this is his perspective and would appreciate
>>    more input from others.
>>
>>    cloud browser should not have any special APIs. This way
>>    applications do not have to change.
>>
>>    Colin's second point is that the client should have a small
>>    addition, the RTE, which needs to communicate with the
>>    orchestration
>>
>>    <colin> A Cloud Browser is unspecified. i.e. it could be
>>    everything from a html5 enabled browser to an android OS
>>
>>    <colin> The client device should be agnostic only a small
>>    addition is needed called a RTE
>>
>>    <colin> The RTE only provide input but doesn't interpreted it
>>
>>    <colin> The RTE communicated with the Orchestration
>>
>>    Alexandra notes that these points seem like requirements.
>>
>>    The first point could be stated as a requirement: the browser
>>    should not be extended with any APIs.
>>
>>    The signaling between the RTE and Orchestration needs to be
>>    standardized.
>>
>>    Also, we should examine W3C specs to see if gaps exist. For
>>    example, a vibrate API might not work if the interaction is
>>    synchronous.
>>
>>    Another example might be the determination of quality metrics
>>    at a client rather than at the browser.
>>
>>    Colin suggests looking at W3C specs to identify cases where
>>    these might not work for certain use cases.
>>
>>    There might be gaps found for accessibility in a cloud browser
>>    environment.
>>
>>    Alexandra will contact John F on accessibility use cases.
>>
>>    <alexandra>
>>    [8]https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_T
>>    F/UseCases
>>
>>       [8] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_T
>> F/UseCases
>>
>>    Colin suggests splitting this so that at TPAC we finalize the
>>    architecture while after TPAC we concentrate on completing the
>>    use cases and requirements
>>
>>    Alexandra also proposes finalizing the first 4 use cases
>>
>>    Alexandra will also provide a status document for presentation
>>    at TPAC.
>>
>>    Kaz asked if WebTV could meet with the WoT IG at TPAC.
>>
>>    (all are interested)
>>
>>    Kaz will get back to the WoT IG and suggest we have a joint
>>    meeting between the Cloud Browser TF and the WoT IG.
>>
>>    Colin asked if the concept of a split browser could also be
>>    discussed with the TAG at TPAC.
>>
>>    Kaz will generate an agenda for the Web TV IG meeting
>>
>>    No further items for discussion. So meeting closed. Meet again
>>    in 2 weeks.
>>
>> Summary of Action Items
>>
>> Summary of Resolutions
>>
>>    [End of minutes]
>>      __________________________________________________________
>>
>>
>>     Minutes formatted by David Booth's [9]scribe.perl version
>>     1.147 ([10]CVS log)
>>     $Date: 2016/08/17 15:52:44 $
>>
>>       [9] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
>>      [10] http://dev.w3.org/cvsweb/2002/scribe/
>>
>>
>>
>
>
> --
> John Foliot
> Principal Accessibility Strategist
> Deque Systems Inc.
> john.foliot@deque.com
>
> Advancing the mission of digital accessibility and inclusion
>
>
>


-- 
John Foliot
Principal Accessibility Strategist
Deque Systems Inc.
john.foliot@deque.com

Advancing the mission of digital accessibility and inclusion

Received on Friday, 19 August 2016 16:41:49 UTC