Re: R24. End user should be able to use speech in a hands-free mode

It appears that we have consensus to replace R24 with two new  
requirements as stated by Bjorn.  We will confirm this in today's  
teleconference.

-- dan

On Nov 23, 2010, at 7:15 AM, Bjorn Bringert wrote:

> To capture both parts of "Not all apps need to have a hands-free mode.
>  Neither do all UAs.", how about adding these two requirements:
>
> 1. "It should be possible for user agents to allow hands-free speech  
> input."
>
> 2. "User agents should not be required to allow hands-free speech  
> input."
>
> /Bjorn
>
> On Tue, Nov 23, 2010 at 12:02 AM, Robert Brown
> <Robert.Brown@microsoft.com> wrote:
>> Or perhaps "it should be possible to create applications that can  
>> operate in a hands-free mode"
>>
>> Not all apps need to have a hands-free mode.  Neither do all UAs.
>>
>> -----Original Message-----
>> From: public-xg-htmlspeech-request@w3.org [mailto:public-xg-htmlspeech-request@w3.org]
>> On Behalf Of Satish Sampath
>> Sent: Monday, November 22, 2010 4:46 AM
>> To: Olli@pettay.fi
>> Cc: public-xg-htmlspeech@w3.org
>> Subject: Re: R24. End user should be able to use speech in a
>> hands-free mode
>>
>> I also think we should not make it mandatory to use speech in
>> hands-free mode if the user agent is not enabled for hands-free mode.
>> Many traditional desktop web browsers are not built for hands-free
>> usage, and it doesn't make sense to me that one particular web page
>> which may use speech input can claim to be hands-free when the rest
>> of the browser (i.e. the browser chrome, menus, and other user
>> interface elements) isn't hands-free.
>>
>> Perhaps the requirement should be "user agents with a hands-free
>> mode should be able to support speech input in hands-free mode as
>> well".
>>
>> Cheers
>> Satish
>>
>>
>>
>> On Mon, Nov 22, 2010 at 12:14 PM, Olli Pettay
>> <Olli.Pettay@helsinki.fi> wrote:
>>> R24 is not quite clear, IMO. The requirement and the explanation
>>> seem to talk about somewhat different things.
>>> Yes, I think the end user should be able to use speech in a
>>> hands-free mode, but "to speech-enable every aspect of a web
>>> application"? Not so sure. There are applications which will be
>>> difficult to fully speech-enable, for example a drawing app that
>>> needs to recognize touch/mouse pressure. Sure, the user could say,
>>> "draw a pixel using pressure x at (1, 1), and then a pixel using
>>> pressure y at (2, 2)", but that wouldn't be quite practical.
>>>
>>> So, I'd say keep R24 (especially with the wording "should" and not
>>> "must"), but clarify the explanation somehow.
>>>
>>> -Olli
>>>
>>>
>>> On 11/22/2010 10:08 AM, Dan Burnett wrote:
>>>>
>>>> Group,
>>>>
>>>> This is the next of the requirements to discuss and prioritize
>>>> based on our ranking approach [1].
>>>>
>>>> This email is the beginning of a thread for questions, discussion,
>>>> and opinions regarding our first draft of Requirement 24 [2].
>>>>
>>>> Please discuss via email as we agreed at the Lyon f2f meeting.
>>>> Outstanding points of contention will be discussed live at an
>>>> upcoming teleconference.
>>>>
>>>> -- dan
>>>>
>>>> [1] http://lists.w3.org/Archives/Public/public-xg-htmlspeech/2010Oct/0024.html
>>>> [2] http://lists.w3.org/Archives/Public/public-xg-htmlspeech/2010Oct/att-0001/speech.html#r24
>
> -- 
> Bjorn Bringert
> Google UK Limited, Registered Office: Belgrave House, 76 Buckingham
> Palace Road, London, SW1W 9TQ
> Registered in England Number: 3977902
>

Received on Thursday, 2 December 2010 13:38:58 UTC