Re: Requirement for UA / SS protocol

We already had a requirement that there must be a standard protocol.
If I understand this list correctly, it adds a number of requirements
on what features this standard protocol must support. I propose that
we consider each of the bullet points a separate requirement, so that
they can be discussed independently.

I think that most of them look fine. The only two that I'm not sure about are:

- web-app -> speech service events, with the same objection that Robert raised.

- Re-recognition using previous audio streams. What's the use case for this?


Also, I think that the following are already covered by existing requirements:

- "Both standard and extension parameters passed from the web app to
the speech service at the start of the interaction.  List of standard
parameters TBD."
  Covered by "FPR11. If the web apps specify speech services, it
should be possible to specify parameters."

- The speech service -> web app part of the bidirectional events
requirement is covered by the following (see the sketch after this
list):

FPR21. The web app should be notified that capture starts.
FPR22. The web app should be notified that speech is considered to
have started for the purposes of recognition.
FPR23. The web app should be notified that speech is considered to
have ended for the purposes of recognition.
FPR24. The web app should be notified when recognition results are available.
FPR28. Speech recognition implementations should be allowed to fire
implementation specific events.
FPR29. Speech synthesis implementations should be allowed to fire
implementation specific events.

- "EMMA results passed from the SS to the web app.  The syntax of this
result is TBD (e.g. XML and/or JSON)."
Covered by:
FPR4. It should be possible for the web application to get the
recognition results in a standard format such as EMMA.

- "Interpretation over text."
Covered by (if I understand it correctly):
FPR2. Implementations must support the XML format of SRGS and must support SISR.
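
To make the notification coverage concrete, here is a purely
hypothetical sketch of how a web app might observe those events.
None of these names exist in any draft; the SpeechRecognizer
constructor and every event name below are invented for illustration:

  // Hypothetical sketch: the constructor and event names are invented
  // to illustrate FPR21-24 and FPR28, not taken from any agreed API.
  var reco = new SpeechRecognizer();                     // invented
  reco.addEventListener("capturestart", function () {    // FPR21
    console.log("audio capture started");
  });
  reco.addEventListener("speechstart", function () {     // FPR22
    console.log("start of speech detected");
  });
  reco.addEventListener("speechend", function () {       // FPR23
    console.log("end of speech detected");
  });
  reco.addEventListener("result", function (e) {         // FPR24
    console.log("recognition results available");
  });
  reco.addEventListener("x-vendor-noise", function () {  // FPR28-style
    console.log("implementation-specific event");        // extension event
  });

And for the EMMA point, a minimal example of what a result for an
utterance like "flights to boston" could look like in the EMMA 1.0
XML syntax. The ids, confidence value, and application namespace are
made up; only the emma:* names come from the EMMA recommendation:

  <emma:emma version="1.0"
      xmlns:emma="http://www.w3.org/2003/04/emma"
      xmlns="http://www.example.com/travel"> <!-- hypothetical namespace -->
    <emma:interpretation id="interp1"
        emma:confidence="0.85"
        emma:tokens="flights to boston"
        emma:medium="acoustic" emma:mode="voice">
      <destination>Boston</destination>
    </emma:interpretation>
  </emma:emma>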


So, the remaining requirements from Milan's list that I support adding are:

* At least one standard audio codec.  UAs are permitted to advertise
alternate codecs at the start of the interaction and SSs are allowed
to select any such alternate (e.g. HTTP Accept; see the sketch after
this list).

* Transport layer security (e.g. HTTPS) if requested by the web app.

* Session identifier that could be used to provide continuity across
multiple interactions (e.g. HTTP cookies).
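
To make the codec, security, and session points concrete, here is a
rough sketch of how they could map onto plain HTTP 1.1 for a
synthesis request. This only illustrates the negotiation pattern;
the host, URL, and codec choices are invented, and nothing here
assumes HTTP will be the protocol we actually pick:

  GET /tts?text=hello%20world HTTP/1.1   (over TLS when the app asks)
  Host: speech.example.com
  Accept: audio/ogg, audio/x-wav         (UA advertises alternate codecs)
  Cookie: session=abc123                 (continuity across interactions)

  HTTP/1.1 200 OK
  Content-Type: audio/ogg                (SS selects an advertised codec)
  Transfer-Encoding: chunked             (audio streamed as it is produced)
  Set-Cookie: session=abc123

A recognition request would run the same pattern in reverse: the UA
POSTs chunked audio to the SS and gets an EMMA document back.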

/Bjorn

On Fri, Nov 19, 2010 at 1:49 AM, Robert Brown
<Robert.Brown@microsoft.com> wrote:
> I mostly agree.  But do we need bidirectional events?  I suspect all the
> interesting ones originate at the server: start-of-speech; hypothesis;
> partial result; warnings of noise, crosstalk, etc.  I’m trying to think why
> the server would care about events from the client, other than when the
> client is done sending audio (which it could do in response to a click or
> end-point detection).
>
> From: public-xg-htmlspeech-request@w3.org
> [mailto:public-xg-htmlspeech-request@w3.org] On Behalf Of Young, Milan
> Sent: Thursday, November 18, 2010 5:34 PM
> To: public-xg-htmlspeech@w3.org
> Subject: Requirement for UA / SS protocol
>
> Hello,
>
> On the Nov 18th conference call, I volunteered to send out proposed
> wording for a new requirement:
>
> Summary - User agents and speech services are required to support at least
> one common protocol.
>
> Description - A common protocol will be defined as part of the final
> recommendation.  It will be built upon some TBD existing application layer
> protocol and include support for the following:
>
>   * Streaming audio data (e.g. HTTP 1.1 chunking).  This includes both audio
> streamed from UA to SS during recognition and audio streamed from SS to UA
> during synthesis.
>
>   * Bidirectional events which can occur anytime during the interaction.
> These events could originate either within the web app (e.g. click) or the
> SS (e.g. start-of-speech or mark) and must be transmitted through the UA in
> a timely fashion.  The set of events includes both standard events defined by
> the final recommendation and extension events.
>
>   * Both standard and extension parameters passed from the web app to the
> speech service at the start of the interaction.  List of standard parameters
> TBD.
>
>   * EMMA results passed from the SS to the web app.  The syntax of this
> result is TBD (e.g. XML and/or JSON).
>
>   * At least one standard audio codec.  UAs are permitted to advertise
> alternate codecs at the start of the interaction and SSs are allowed to
> select any such alternate (e.g. HTTP Accept).
>
>   * Transport layer security (e.g. HTTPS) if requested by the web app.
>
>   * Session identifier that could be used to provide continuity across multiple
> interactions (e.g. HTTP cookies).
>
>   * Interpretation over text.
>
>   * Re-recognition using previous audio streams.
>
> Thank you

-- 
Bjorn Bringert
Google UK Limited, Registered Office: Belgrave House, 76 Buckingham
Palace Road, London, SW1W 9TQ
Registered in England Number: 3977902
