- From: White, Jason J <jjwhite@ets.org>
- Date: Mon, 13 Jul 2015 18:22:49 +0000
- To: Jeanne Spellman <jeanne@w3.org>
- CC: "public-wai-rd@w3.org" <public-wai-rd@w3.org>, Henny Swan <hswan@paciellogroup.com>, Susann Keohane <skeohane@us.ibm.com>
> On Jul 13, 2015, at 13:57, Jeanne Spellman <jeanne@w3.org> wrote:
>
> * smart watch that has multi functions like tracking activity or
> blood pressure - the device itself may not have an accessible UI but
> the user can access the data through a mobile app or Website.

The security model for such devices is important. Suppose a user has a
medical condition that requires remote monitoring of the health data.
Strict control of access to the data is fundamental for privacy, but
with the user’s consent, it should be possible to grant access to
medical or other sources of support. This isn’t the only case where
accessibility, security and privacy intersect.

> * home thermostat which is challenging due to many of the devices
> having a multi-function UI of displaying the temp and setting the temp

And some of them (the one in my current apartment, for example) can be
programmed to set the temperature differently depending on the time of
day. An issue for the IoT here is the need to ensure that all of the
functionality available via the physical UI on the device is also
available through the protocol for remote access and administration.
Only by providing parity of functionality does the protocol enable the
device to be accessible to those who can’t use the provided UI.

> * Example of the automated home where an alert needs to blink the
> lights for someone who is deaf. The user needs to be able to program
> a pattern of light blinking to have a unique visual indication to
> replace different audio alerts. (e.g. the door alarm should have a
> different light blinking pattern than the oven timer.)

This is a good example of a case in which the device needs to be
programmable via the network protocol in ways that it isn’t via the
provided UI.

> Airport sign: A blind person is looking for a restaurant. Web App of
> the airport. Sign is in a location it knows, and has data about
> finding the restaurant.
> The data on the sign location could be sent to a user's navigation
> app on their phone. Microsoft has new glasses that will do
> in-building navigation. Buzz a wearable for someone who is deaf, or
> give text directions to a wearable.

And the protocol could offer information that isn’t presented visually
on the sign, but which would help such a person become oriented to the
vicinity.

> * Important to have simplified interfaces for controllers with clear
> resets when a user disables something important. (e.g. an elderly
> person accidentally turning off the heat in a smart house on a cold
> winter night.)

If it’s easy to customize the Web interface of an application that
supports the device’s protocol, simplification of the user interface
should be possible - potentially to meet the needs of an individual
user who only requires the ability to adjust certain controls. Symbols,
constrained language, etc., could be used in the interface, which could
be provided by a third party - not the device or Web application
developer.

> We think we are just starting to skim the surface. I'm glad to see
> there is a WAI group that is looking at these issues.

I’m sure there are many further undiscovered possibilities awaiting
research in this area.
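To make the light-blinking alert idea above concrete, here is a minimal
sketch of how a remote protocol could let a user register a distinct
blink pattern for each alert source. All names here are hypothetical
illustrations of the idea, not any real smart-home API:

```python
# Hypothetical sketch: a user-programmable mapping from alert sources
# to blink patterns, standing in for what a smart-home network
# protocol could expose. No real device API is assumed.

from dataclasses import dataclass, field


@dataclass
class BlinkPattern:
    """A visual alert: on/off durations in milliseconds, repeated."""
    on_ms: int
    off_ms: int
    repeats: int


@dataclass
class LightAlertController:
    """Holds the user's mapping from alert source to blink pattern."""
    patterns: dict = field(default_factory=dict)

    def program(self, source: str, pattern: BlinkPattern) -> None:
        # In a real protocol this would be a remote call to the device;
        # the point is that it is reachable over the network, not only
        # via the device's own physical UI.
        self.patterns[source] = pattern

    def alert(self, source: str) -> BlinkPattern:
        # Fall back to a generic pattern for unprogrammed sources.
        return self.patterns.get(source, BlinkPattern(500, 500, 3))


# Usage: the door alarm and the oven timer get distinct patterns,
# so a deaf user can tell them apart at a glance.
controller = LightAlertController()
controller.program("door_alarm", BlinkPattern(on_ms=200, off_ms=200, repeats=10))
controller.program("oven_timer", BlinkPattern(on_ms=1000, off_ms=500, repeats=3))
```

The design choice worth noting is that the pattern registry lives behind
the protocol, so an assistive app (rather than the device’s built-in UI)
can do the programming.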
Received on Monday, 13 July 2015 18:23:20 UTC