Re: [cloud browser] minutes - 17 August 2016

Hi John,

Thank you for your input. Very helpful! I believe accessibility has a high priority in this Task Force. The Cloud Browser could never become a successful or accepted standard if there were no way to make it accessible to people with a functional disability. We have some initial ideas stated in the introduction to the cloud browser [1].

That said, I think there is a difference from a regular browser, or "local browser" as it is referred to in the introduction. We are part of the W3C Web and TV group, and making the web work on TV is a subject on its own. It is not as simple as plugging a braille display into a smart TV and having it work. Enabling assistive technology would require liaison with many other organisations. The way it currently works is that each solution (e.g. STB middleware) implements its own approach to accessibility. In addition, the model of using the web on TV is different. At least the way I see it: on TV you use applications instead of browsing through documents. You will not do a Google search but start an application from a portal. Obviously, if a cloud browser implementer would like to provide a browser to surf the web, that should be possible in our architecture.

I personally don't like to differentiate between types of disabilities (though it is fine to do so for use cases). The assumption that auditory and cognitive impairments are solely the responsibility of the content producer is incorrect. To give an example of each: there is a technical challenge in providing captions on a video through the cloud browser. Normally (in broadcast) the captions are in-band [2] and could, for example, be rendered by the middleware; the cloud browser needs a way to provide this as well. Another challenge is enabling a seamless experience. It could be quite hard for someone with a cognitive impairment to learn all the different ways to navigate through the applications. A solution could be to provide an overarching navigation model through the cloud browser, which would make this much easier. This is far from trivial in the cloud browser architecture.
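To make the captioning challenge concrete, here is a minimal sketch of one possible approach: the cloud-side browser reads in-band cues through the HTML5 TextTrack API and forwards them to the thin client as timed JSON messages, so the client (or its middleware) can render them locally. The message shape ("caption-cue") and the forwarding step are my assumptions for illustration, not anything the TF has specified.

```javascript
// Hypothetical sketch: serialize a text-track cue into a transport
// message the orchestration could push to the client device.
// The "caption-cue" message shape is an assumption for illustration.
function cueToMessage(cue) {
  return JSON.stringify({
    type: 'caption-cue',
    start: cue.startTime, // seconds
    end: cue.endTime,     // seconds
    text: cue.text,
  });
}

// In a real cloud browser this would hang off a <video> element, e.g.
// video.textTracks[0].addEventListener('cuechange', ...); here we only
// show the serialization step with a plain cue-like object.
const msg = cueToMessage({ startTime: 1.0, endTime: 3.5, text: 'Hello' });
```

The point of the sketch is only that cue timing and text survive the hop to the client; how the client then styles and positions the captions is a separate question.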

I will also attend TPAC and am happy to chat on this topic. In our latest meeting we decided to focus on the state, session, and control use cases, but we will work on other use cases after TPAC, as the TF will probably be extended for six months.

[1] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF/Introduction_cloud_browser#Accessibility

[2] https://dev.w3.org/html5/html-sourcing-inband-tracks/


On 18 Aug 2016, at 19:59, John Foliot <john.foliot@deque.com> wrote:

> Alexandra will contact John F on accessibility use cases.

Hi All,

I had provided the following previously: https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF/UseCases#Accessibility


By my reading, and following along as best I can, I see that to date you have been focusing on how the dumb screen connects to the cloud browser from an architectural/technical basis. From an accessibility perspective, however, the larger question is: how does the *user* interact with the cloud browser? I have not yet seen any discussion on how the end user inputs/interacts with the cloud browser.

For example, if the end user wants to do a Google search on their cloud browser (rendered on their 60" big-screen TV), how do they input the search term text? Beyond the thorny question of text input, how else does the end user "click" a button on the rendered web page? Without a traditional keyboard and pointing-device mechanism (normally associated with more traditional computing environments), or a 'touch interface' (tablets/mobile devices), interacting with the content is going to be left to what? An on-screen keyboard? (And how/who supplies that?) How would users interact with tab-focusable content on a webpage in the cloud browser?

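One way to think about the input question above, consistent with the TF's idea (from the minutes below) that the client-side RTE only forwards input without interpreting it, is sketched here: the thin client wraps raw remote-control key presses into uninterpreted messages for the cloud-side orchestration, which then turns them into focus/activation behaviour. The key codes and message shape are my assumptions for illustration.

```javascript
// Hypothetical sketch: an RTE on the client forwards raw remote-control
// key presses to the cloud-side orchestration without interpreting them.
// Key-code table and message shape are assumptions for illustration.
const REMOTE_KEYS = {
  37: 'LEFT', 38: 'UP', 39: 'RIGHT', 40: 'DOWN', 13: 'OK',
};

// Wrap a raw key code into an uninterpreted input message; the cloud
// browser decides what "OK" means for the currently focused element.
function keyToMessage(code) {
  return { type: 'input', key: REMOTE_KEYS[code] || 'UNKNOWN', code: code };
}
```

Under this split, an on-screen keyboard or an assistive-technology "dongle" would just be another producer of such raw input messages, which is what makes the question of who supplies it so important.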

Returning to accessibility: traditionally, when we talk about accessibility considerations, there are 4 basic categories of disability that we focus on, and each of those categories offers a range of impairment (it's never black or white). They are:

  *   Visual disability (which can range from complete blindness to low vision and color-blind issues)
  *   Auditory disability (which can range from complete deafness to varied forms of hearing loss, or configurations or scenarios that do not support audio output - think stand-alone kiosks, or environments where audio can be a problem such as in a library - shhh - or a steel foundry - "WHAT??? I CAN'T HEAR YOU...")
  *   Mobility disability (again ranging from complete quadriplegia or amputation, to lacking fine-motor control due to arthritis, tremors, or temporary conditions like having a cast on your hand)
  *   Cognitive disabilities (from Down's Syndrome to ADHD or dyslexia)

Equally, some users may have multiple disabilities across the different categories, and I often reference seniors as one such class of user: as we age we may start to exhibit deficiencies such as reduced vision (seniors need reading glasses), hearing loss (seniors tend to need hearing aids more often), reduced mobility (arthritis), and even reduced cognition (Alzheimer's). This makes seniors an interesting and important use case in and of themselves.


From the perspective of this group's activities, for now we can presume that issues affecting users with either Auditory or Cognitive impairments will likely be addressed by the content producer (i.e. ensuring that captions are provided for multimedia content, or the ability to personalize or simplify individual pages for those with cognition issues, etc.). However, issues surrounding Visual disabilities and Mobility disabilities will be directly impacted - or will directly impact - your ability to deliver this in an accessible fashion.

Blind users are dependent on screen-reading software to be able to interact with web content. Screen readers are either third-party software tools (JAWS, NVDA, and others on Windows; ORCA and others on Linux) or are provided as part of the OS (VoiceOver on Mac and iOS, TalkBack on Android, Narrator on Windows/Windows Phone, VoiceView for Fire TV devices (https://www.amazon.com/gp/help/customer/display.html?nodeId=202042100)). A critical question for this group is how to support this requirement (a screen reader) in your cloud browser. Who is responsible for supplying that functionality? The cloud browser ecosystem, or the end user? If it is the end user, how (exactly) do you (we?) envision that happening? Will there be an extended requirement for an external 'dongle' attached to your dumb screen that the end user will require to actually interact with the cloud browser? (This would probably mirror the Fire TV dongle solution, AFAIK.)

For users who have low vision, some will use the built-in zoom controls we see in today's modern browsers (and hopefully in the cloud browser(s) being discussed here). However, in many cases the zoom functionality offered by the browser is insufficient for the individual, at which point they too will use a third-party tool (ZoomText, MAGic, etc.) that not only increases the magnification significantly beyond the 200% threshold referenced in WCAG (https://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-visual-presentation.html) but also allows the end user to change color palettes to varying degrees, to address other visual deficiencies (video: https://www.youtube.com/watch?v=afny3NMZBnI - worth watching).

Users with mobility impairments may not be able to interact with a standard TV remote. A means and method of adding alternative input devices will also need to be considered, ranging from speech input (not sure how to do that with a cloud browser today, as it requires a microphone) to alternative input devices, from keyboards to sip-and-puff mechanisms or alternate switching controls.
(See the foot-mouse here: http://www.turningpointtechnology.com/Sx/AltMice.asp or sip-and-puff here: http://www.orin.com/access/sip_puff/)


I have thought about this a bit - off and on - and it seems to me that at a minimum, the larger dumb-screen will require an input interface port (USB?) that would allow a dongle of sorts to be inserted, and by that means alternative interactions (and perhaps even assistive technology) could be introduced into the mix. How to wire that all up architecturally and technologically I'm not quite sure, but I think this is or will be the ultimate accessibility challenge this group will need to tackle.

I hope this helps (I can add this to the wiki as well), and I am happy to continue here. I will also be in Lisbon for TPAC, and perhaps if others are there we can chat about this more. There will be other accessibility SMEs in attendance then as well, and so either formally or informally I think there will be an opportunity for further discussion that week if desired.

JF


On Wed, Aug 17, 2016 at 10:54 AM, Kazuyuki Ashimura <ashimura@w3.org> wrote:
available at:
  https://www.w3.org/2016/08/17-webtv-minutes.html


also as text below.

Thanks a lot for taking notes, Nilo!

Kazuyuki

---
   [1]W3C

      [1] http://www.w3.org/


                               - DRAFT -

                      Web&TV IG - Cloud Browser TF

17 Aug 2016

Attendees

   Present
          Colin, Alexandra, Steve, Nilo, Kaz

   Regrets
   Chair
          Alexandra

   Scribe
          Nilo

Contents

     * [2]Topics
     * [3]Summary of Action Items
     * [4]Summary of Resolutions
     __________________________________________________________

   <scribe> scribe: Nilo

   The architecture chapter is now finalized

   <alexandra>
   [5]https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF/Architecture

      [5] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF/Architecture


   we should start to review the chapter and provide comments on
   clarity etc.

   Review it in the next two weeks

   Either send public comments or correct minor mistakes directly
   in the wiki

   We'll discuss the changes in the next meeting, two weeks from
   now.

   Chapter should be finalized before TPAC

   Colin: some of the terminology seems interchangeable
   ... look more closely at the differences between the text and
   images, e.g., what does "input data" mean?
   ... This way those who reuse our pictures (e.g., GSMA) will not
   end up with pictures that are not properly explained

   Alexandra has also updated the TF page

   <alexandra>
   [6]https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF

      [6] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF


   Added Colin's text on introduction to the cloud browser.

   <alexandra>
   [7]https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF/Introduction_cloud_browser

      [7] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF/Introduction_cloud_browser


   Colin notes that this is his perspective and would appreciate
   more input from others.

   cloud browser should not have any special APIs. This way
   applications do not have to change.

   Colin's second point is that the client should have a small
   addition, the RTE, which needs to communicate with the
   orchestration

   <colin> A Cloud Browser is unspecified, i.e. it could be
   anything from an HTML5-enabled browser to an Android OS

   <colin> The client device should be agnostic; only a small
   addition is needed, called an RTE

   <colin> The RTE only provides input but doesn't interpret it

   <colin> The RTE communicates with the Orchestration

   Alexandra notes that these points seem like requirements.

   The first point could be stated as a requirement: the browser
   should not be extended with any APIs.

   The signaling between the RTE and Orchestration needs to be
   standardized.

   Also, we should focus on the gaps in W3C specs to see if gaps
   exist. For example, a vibrate API might not work if the
   interaction is synchronous.

   Another example might be the determination of quality metrics
   at a client rather than at the browser.

   Colin suggests looking at W3C specs to identify cases where
   these might not work for certain use cases.

   There might be gaps found for accessibility in a cloud browser
   environment.

   Alexandra will contact John F on accessibility use cases.

   <alexandra>
   [8]https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF/UseCases

      [8] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF/UseCases


   Colin suggests splitting this so that at TPAC we finalize the
   architecture while after TPAC we concentrate on completing the
   use cases and requirements

   Alexandra also proposes finalizing the first 4 use cases

   Alexandra will also provide a status document for presentation
   at TPAC

   Kaz asked if WebTV could meet with the WoT IG at TPAC.

   (all are interested)

   Kaz will get back to the WoT IG and suggest we have a joint
   meeting between the Cloud Browser TF and the WoT IG.

   Colin asked if the concept of a split browser could also be
   discussed with the TAG at TPAC.

   Kaz will generate an agenda for the Web TV IG meeting

   No further items for discussion. So meeting closed. Meet again
   in 2 weeks.

Summary of Action Items

Summary of Resolutions

   [End of minutes]
     __________________________________________________________


    Minutes formatted by David Booth's [9]scribe.perl version
    1.147 ([10]CVS log)
    $Date: 2016/08/17 15:52:44 $

      [9] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm

     [10] http://dev.w3.org/cvsweb/2002/scribe/






--
John Foliot
Principal Accessibility Strategist
Deque Systems Inc.
john.foliot@deque.com

Advancing the mission of digital accessibility and inclusion

Received on Friday, 19 August 2016 08:41:09 UTC