From: Gili <cowwoc@bbs.darktech.org>
Date: Thu, 27 Jun 2013 10:19:52 -0400
To: public-webrtc@w3.org
Message-ID: <51CC4A08.1040903@bbs.darktech.org>
Hi,
(If you'd like to respond to individual points, please start a
separate topic)
I'd like to start a discussion of issues that came up during the
WebRTC World conference (in sessions and while speaking with Dan Burnett
and Cullen Jennings):
1. Ending the VP8/H264 war: A proposal was made to make a
patent-unencumbered codec (one whose patents have expired or are no
longer enforced) mandatory to implement, with an optional upgrade to
other codecs such as VP8 or H264 depending on peer capabilities and
personal preferences. VP8 guys can use VP8. H264 guys can use H264.
And if the two camps need to chat with each other, they can fall
back on H263. This gives you the flexibility of arbitrary codecs
without the need to do transcoding. (A rough sketch of this kind of
codec selection follows after this list.)
2. The WebRTC API needs to focus on normal web developers, not
telecom experts: The conversation on this mailing list is unduly
skewed in favor of telecom experts, who make up a tiny minority of
WebRTC end-users. We need to find a way to collect feedback from the
JavaScript community at large in order to ensure that the API
facilitates their use-cases. The proliferation of WebRTC SDKs for
end-users (the conference was full of them) is a strong indication
that there is a gap to be filled.
3. Implementers vs End-users: The specification document has two
target audiences, implementers and end-users. We need to provide
implementers with a lot of low-level detail but make as few
guarantees as possible to end-users, to leave the door open to
future change (without breaking backwards compatibility). We
discussed explicitly marking up sections of the specification "for
implementers" or "for end-users", or splitting the specification
into separate documents. We need to make it clear, for example, that
the specification does not make any guarantees regarding the
contents of the SDP token. Implementers need a detailed breakdown in
order to implement WebRTC 1.0, but end-users must not rely on these
details because the token might not even be SDP in future versions.
4. SDP: Users should interact with the Constraints API instead of
SDP. It is true that there are some use-cases that are not yet
covered by this API (forcing you to manipulate the SDP directly),
but the plan is to address all of these use-cases by 1.0 so that
users never have to interact with SDP directly. "If your use-case is
not covered by the Constraints API, please tell us right away!" (See
the constraints sketch after this list.)
5. Offer/Answer: There are plans to enable peers to query each
other's capabilities and change constraints (and, as a result, the
offer/answer) in mid-call. (A renegotiation sketch follows after
this list.)
6. Troubleshooting WebRTC: We need to do a better job diagnosing
WebRTC problems. We need a user-friendly application (one that
non-developers can run!) for quickly debugging network and
microphone problems (Skype does this), and it should allow users to
drill down into more detail if necessary. We also need programmatic
access to the same diagnostics so WebRTC applications can detect
problems at runtime and decide (for example) to refund users who
paid for a call that was subsequently aborted due to network
problems. (A statistics sketch follows after this list.)
7. Use-cases, use-cases, use-cases: "Tell us what is wrong, not how
to fix it". You are a lot more likely to get traction for your
problems if you help us understand your use-cases than by arguing
for change for its own sake. On the flip side, I encourage
specification editors to actively engage posters (ask for these
use-cases) instead of ignoring discussion threads ;)
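To make point 1 concrete, here is a minimal sketch (TypeScript, against
the standard "a=rtpmap:<payload> <codec>/<clock>" SDP syntax) of how an
application might list the video codecs a remote offer advertises and
pick a common one, falling back to the proposed baseline. The function
names and the preference order are illustrative, not part of any spec.

    // List the video codecs advertised in an SDP blob by scanning the
    // a=rtpmap attributes of the m=video section.
    function listVideoCodecs(sdp: string): string[] {
      const codecs = new Set<string>();
      let inVideoSection = false;
      for (const line of sdp.split(/\r?\n/)) {
        if (line.startsWith("m=")) {
          inVideoSection = line.startsWith("m=video");
        } else if (inVideoSection) {
          const match = /^a=rtpmap:\d+ ([^/]+)\//.exec(line);
          if (match) {
            codecs.add(match[1].toUpperCase());
          }
        }
      }
      return [...codecs];
    }

    // Pick the "best" codec both peers understand, falling back to the
    // hypothetical mandatory baseline (H263 in the proposal above).
    function chooseVideoCodec(localCodecs: string[], remoteSdp: string): string {
      const remote = new Set(listVideoCodecs(remoteSdp));
      for (const preferred of ["VP8", "H264"]) {
        if (localCodecs.includes(preferred) && remote.has(preferred)) {
          return preferred;
        }
      }
      return "H263";
    }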
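Regarding point 4, this is roughly what "interacting with the
Constraints API instead of SDP" looks like from script. The sketch uses
the later, promise-based getUserMedia(); the particular constraint
names (width, frameRate) are just examples, not a complete list.

    // Request media by describing what you want, instead of editing SDP by hand.
    async function openCamera(): Promise<MediaStream> {
      return navigator.mediaDevices.getUserMedia({
        audio: true,
        video: {
          width: { ideal: 1280 },   // preferred, not mandatory
          frameRate: { max: 30 },   // hard upper bound
        },
      });
    }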
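For point 5, a mid-call change boils down to running the offer/answer
exchange again on an existing connection. Below is a hedged sketch;
"signaling" stands in for whatever application-defined channel (e.g. a
WebSocket) carries the descriptions between peers.

    // The side that changed its constraints creates and sends a new offer.
    async function renegotiate(
      pc: RTCPeerConnection,
      signaling: { send(desc: RTCSessionDescriptionInit): void }
    ): Promise<void> {
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      signaling.send(offer);
    }

    // The other side applies the offer and sends back an answer.
    async function handleRemoteOffer(
      pc: RTCPeerConnection,
      offer: RTCSessionDescriptionInit,
      signaling: { send(desc: RTCSessionDescriptionInit): void }
    ): Promise<void> {
      await pc.setRemoteDescription(offer);
      const answer = await pc.createAnswer();
      await pc.setLocalDescription(answer);
      signaling.send(answer);
    }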
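For the programmatic half of point 6, the statistics API is the natural
hook. The sketch below assumes the promise-based getStats() shape and
an arbitrary 5% packet-loss threshold; both are illustrative, not
something the specification mandates.

    // Decide at runtime whether a call looked healthy, e.g. before charging for it.
    async function callLooksHealthy(pc: RTCPeerConnection): Promise<boolean> {
      const report = await pc.getStats();
      let healthy = true;
      report.forEach((stat) => {
        if (stat.type === "inbound-rtp") {
          const received = stat.packetsReceived ?? 0;
          const lost = stat.packetsLost ?? 0;
          const total = received + lost;
          if (total > 0 && lost / total > 0.05) {
            healthy = false; // more than 5% loss: likely network trouble
          }
        }
      });
      return healthy;
    }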
I encourage other people who attended the conference to contribute
their own discussion points.
(If you'd like to respond to individual points, please start a
separate topic)
Thank you,
Gili
Received on Thursday, 27 June 2013 14:20:29 UTC