
Re: Mozilla/Cisco API Proposal

From: Ralph Giles <giles@thaumas.net>
Date: Thu, 14 Jul 2011 11:49:18 -0700
Message-ID: <CAEW_RktO=5D64ajwqHkWkNDapW1SOnuABTnnca7mNmYhEhyB8A@mail.gmail.com>
To: Ian Hickson <ian@hixie.ch>
Cc: "public-webrtc@w3.org" <public-webrtc@w3.org>
On 13 July 2011 18:33, Ian Hickson <ian@hixie.ch> wrote:

>> 1) How you want to handle music vs spoken voice is totally different. For
>> spoken voice, you want to filter out background noise, fan hum, etc
>> while for music you typically want to capture all the nuances.
>
> Agreed. Do we have any use cases that involve music?

Now that you mention it, the IETF rtcweb use cases and requirements
mention voice communication, but not music. A use case would be a link
between two locations participating in an event with ambient music.
Or a teacher giving music lessons through a website.

I would say the primary characteristic of Real Time Communication is
*latency*, not quality. The various things which are done to enhance
fidelity of music or intelligibility of spoken conversation are
optimizations for particular environments, not essential components of
our design.

Rather, the discussion here arises because these are two optimization
modes that we as implementers feel are worth describing in the API
specification. We recognize that voice-primary communication is a large use
case, and that there will be significant connections to phone-like
devices which can only support this mode. At the same time, one of our
technical achievements with the IETF codec working group has been to
finally bring streaming-quality audio to real time communication, so
the same system could be used for concerts, event broadcasts, and
really any situation where the recording environment supports higher
quality audio. We fear that we cannot reliably determine which of
these environments a user agent is participating in without context.
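To make the two optimization modes concrete, here is a sketch of how a
page might express them as audio constraints. This is purely
illustrative: the constraint names (`echoCancellation`,
`noiseSuppression`, `autoGainControl`, `channelCount`) follow the
pattern the Media Capture and Streams work later adopted, and none of
them were settled at the time of this discussion.

```javascript
// Hypothetical constraint sets for the two modes discussed above.
// Nothing here is a settled API; the names are assumptions.
const voiceConstraints = {
  audio: {
    echoCancellation: true,  // suppress far-end echo in conversation
    noiseSuppression: true,  // filter out fan hum, background noise
    autoGainControl: true    // normalize speech level
  }
};

const musicConstraints = {
  audio: {
    echoCancellation: false, // keep the full signal
    noiseSuppression: false, // capture all the nuances
    autoGainControl: false,  // preserve dynamics
    channelCount: 2          // stereo where the hardware allows it
  }
};

// A page would then request the mode it believes it is in, e.g.:
// navigator.mediaDevices.getUserMedia(musicConstraints)
//   .then(stream => { /* attach stream to a peer connection */ });
```

The point of the sketch is that the *page* has to pick a mode up
front, which is exactly the "we cannot reliably determine which
environment we are in" problem described above.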

> I'm certainly open to better terms (front/back don't work; the front
> camera on a DSLR is very different than the front camera on a phone, hence
> the user/environment names). But this is a pretty important feature.

I'm now going to argue the other direction: that we don't need to
expose the camera choice in the web API. Most devices will have only a
handful of input options; privacy requirements mean we have to ask the
user for approval before granting access anyway; and the platform may
have additional sensors and preferences which affect the choice. For
the current spec, I think it's better to leave it up to the user
agents to provide UI for this.
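The two positions can be contrasted in a short sketch. The
`facingMode` constraint with "user"/"environment" values mirrors the
naming Ian mentions; it was not a settled API at the time, so treat it
as an assumption.

```javascript
// (a) The page expresses a camera preference as a hint
//     ("facingMode" is a hypothetical constraint name here):
const hinted = { video: { facingMode: "environment" } };

// (b) The page asks only for video; the user agent's permission
//     prompt doubles as the camera picker:
const uaChooses = { video: true };

// Either way the request shape is the same, e.g.:
// navigator.mediaDevices.getUserMedia(uaChooses)
//   .then(stream => { /* user agent picked the device */ });
```

Under approach (b) the approval step the privacy requirements already
force on us is also where the device choice happens, which is the
argument for keeping it out of the API.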

 -r
Received on Thursday, 14 July 2011 18:49:48 GMT