- From: Young, Milan <Milan.Young@nuance.com>
- Date: Thu, 13 Jan 2011 09:59:26 -0800
- To: <Olli@pettay.fi>, <public-xg-htmlspeech@w3.org>
Hello Olli,

I'd be interested to know what sort of use case you have in mind that uses default speech services. I have been under the impression that most real-world apps have grammars that are either too large or too sensitive to be transported over the network.

Thanks

-----Original Message-----
From: public-xg-htmlspeech-request@w3.org [mailto:public-xg-htmlspeech-request@w3.org] On Behalf Of Olli Pettay
Sent: Thursday, January 13, 2011 7:14 AM
To: public-xg-htmlspeech@w3.org
Subject: Some prioritization

Hi all,

I may not be able to attend the conference call today (if we have one). But anyway, I have started to prioritize the requirements the way I think about them. Or rather, I picked out the lower-priority requirements and categorized them into three groups. I don't know how we're going to prioritize requirements, but I guess it doesn't hurt to send this kind of email so that you know what kind of specification proposal I'm expecting to see later this year.

-------------
A bit lower priority:

FPR46. Web apps should be able to specify which voice is used for TTS.
FPR57. Web applications must be able to request recognition based on previously sent audio.

-------------
Low priority:

FPR28. Speech recognition implementations should be allowed to fire implementation-specific events.
FPR31. User agents and speech services may agree to use alternate protocols for communication.
FPR48. Web application authors must be able to specify a domain-specific statistical language model.
FPR56. Web applications must be able to request NL interpretation based only on text input (no audio sent).

-------------
Something perhaps for a V2 specification:

These requirements can be important, but to get at least something done soon we could perhaps leave them out of the v1 specification. Note that the v2 specification could be developed simultaneously with v1.

FPR7. Web apps should be able to request a speech service different from the default.

...and, because of that, also the following requirements:

FPR11. If web apps specify speech services, it should be possible to specify parameters.
FPR12. Speech services that can be specified by web apps must include network speech services.
FPR27. Speech recognition implementations should be allowed to add implementation-specific information to speech recognition results.
FPR30. Web applications must be allowed at least one form of communication with a particular speech service that is supported in all UAs.
FPR33. There should be at least one mandatory-to-support codec that isn't encumbered with IP issues and has sufficient fidelity and low bandwidth requirements.
FPR55. Web applications must be able to encrypt communications to a remote speech service.
FPR58. Web applications and speech services must have a means of binding session information to communications.

-Olli
Received on Thursday, 13 January 2011 18:00:00 UTC