- From: Yosuke Funahashi <yosuke@funahashi.cc>
- Date: Sat, 1 Nov 2014 02:17:18 +0900
- To: public-web-and-tv IG <public-web-and-tv@w3.org>
- Message-ID: <CAFAsJP2i8A0kJrPOjaOWspurpbu_bhYk9kFU5ynkFZ+L6AwYNg@mail.gmail.com>
Hi IG members,
Please find the draft minutes of the IG face-to-face meeting on 27 October.
Minutes:
http://www.w3.org/2014/10/27-webtv-minutes.html
I've polished the draft minutes: fixed broken styling and improved
the content by correcting and adding text that the scribes couldn't
capture. The Present section will be provided later.
Best regards,
Yosuke
---
[1]W3C
[1] http://www.w3.org/
- DRAFT -
Web&TV IG f2f meeting in Santa Clara
27 Oct 2014
[2]Agenda
[2] https://www.w3.org/2011/webtv/wiki/Face-to-face_meeting_during_TPAC_2014#Agenda_Monday_October_27
See also: [3]IRC log
[3] http://www.w3.org/2014/10/27-webtv-irc
Attendees
Present
Regrets
Chair
Yosuke, Mark, Giuseppe
Scribe
Kaz, Francois, Daniel, jcverdie, Alex, giuseppe, yosuke
Contents
* [4]Topics
1. [5]Welcome and Logistics
2. [6]IG Introduction - Giuseppe
3. [7]Agenda Bashing
4. [8]Updates from SDOs: Aligning Global and Regional
Web&TV Standards
5. [9]Joint Sessions with Other Groups: TV Control API
Community Group
6. [10]Joint Sessions with Other Groups: Second Screen
Working Group
7. [11]Joint Sessions with Other Groups: HTML WG
Accessibility TF
8. [12]New Contributions from IG members
9. [13]Update on HTML5 EME, MSE and in-band resource
issues
10. [14]The Second Round Use Cases
11. [15]Wrap-up & Next Steps
* [16]Summary of Action Items
__________________________________________________________
<kaz> Scribe: Kaz
<scribe> scribenick: kaz
Welcome and Logistics
yosuke: good morning
... welcome to the meeting
... this is a one-day meeting
... including some joint meetings
... if you have any ideas for today's meeting, you can bring them up
... I have changed my affiliation to the W3C Team
... Kaz is taking notes
... we'll start with brief introductions
ddavis: Daniel Davis from W3C/Keio
... Team Contact
jfoliot: John Foliot
raphael: chair of media fragments
skim13: Sung-Hei
<scribe> (continues...)
yosuke: We need volunteers for scribing.
tidoust: volunteers to scribe for the second session
ddavis: for 11:00-13:00
yosuke: need 4 more scribes
alex: for 14:00-15:00
jcverdie: for 15-15:30
yosuke: for 17-18
... tx all
IG Introduction - Giuseppe
(slides to be added)
giuseppep: [Goals]
... look not at specs but at the big picture of requirements
... media-centric apps
... identify gaps
... review of deliverables of other groups
... give feedback
... also liaison with other organizations
... [Web and TV IG]
... IG != WG
... link to resources
... wiki and home page
... phone calls and email discussions
... [Web and TV Interest Group Work Flow]
... figures out our work
... extract requirements from use cases
... and then gap analysis
... highlight real gaps
... with the existing specs
... sometimes need new APIs
... or sometimes need a new area
... may create a new Community Group
... [IG work overview]
... Closed TFs: Home Network, Media Pipeline, Testing, Timed Text
Giuseppe: [Home Network TF (closed)]
... contributed to the DAP WG
... privacy, security, etc.
... [Media Pipeline TF (closed)]
... adaptive streaming and content protection
... Gap analysis went to HTML5
... HTML5 getting a REC tomorrow
... [Testing TF (closed)]
... open web platform testing
yosuke: We had an initial report
... on testing from W3C at the Boston AC
... meeting this year: some parts of the activity worked
... really well and we learned lessons as well.
... While continuing the successful parts, we're currently
... thinking about the next plan
giuseppep: [Timed Text TF (closed)]
... TTML vs WebVTT
... asked W3C to create only one format
... also mapping from one to another
... accepted by the TTWG
... [Media APIs TF (ongoing)]
... previous work: metadata, etc.
... current work: requirements for audio
fingerprinting/watermarking
yosuke: comments/questions?
(nothing)
Agenda Bashing
mark: agenda posted for today
... want to make sure it works for everybody
... a lot are covered
... first, some updates on work with other SDOs
... talk about what they're working on
... then
... break and
... a couple of joint sessions with other W3C groups
... TV Control API CG spun out from the IG
... Second Screen Presentation CG
... HTML WG Accessibility TF
... then lunch
... and back to the work of the IG: new area
... SMPTE and GGIE work
... and update report of key features, e.g., HTML5 EME, MSE and
in-band resource issues
... related to the work of the IG
... afternoon break
... and back to new use cases of the IG
... then wrap-up and next steps
... anything to add?
chris: is 15min enough for second screen?
mark: the first session with the TV Control API CG is
discussion
... but second screen/accessibility should be brief update
louay: 15 min presentation and ... 10 min Q&A seems fine.
jfoliot: regarding the accessibility tf
... brief update today, and another joint session on Thursday
mark: will add 10 more mins for second screen
... we can be flexible
Updates from SDOs
<tidoust> scribe: Francois
<tidoust> scribenick: tidoust
Giridhar_Mandyam: [ATSC 3.0 update]
... working for Qualcomm, giving an update from the ATSC perspective
... 3 years ago, ATSC board approved work on ATSC 3.0
... no requirement to be backwards-compatible
... Redesign of the ATSC transmission top to bottom. Looking at
physical layer, codecs. Another major change is looking at new
transport layer, investigating alternatives to MPEG2-TS.
... One thing that is most important to ATSC for CE
manufacturers and broadcasters is hybrid services.
... Organization of ATSC 3.0 in different groups, including S31
(requirements), S32 (physical layer), S33, and S34 (apps and
presentation) which is the most relevant for this group.
... The process flow: we started with a list of about 150
requirements. Some of them overlapped, of course.
... We created a bunch of sub-groups, e.g. audio.
... We decided to go to IP transport instead of MPEG2-TS. Two
proposals have come in for broadcast purpose: ROUTE/DASH, using
a FLUTE derivative, and MMT/DASH, using an MPEG standard.
... Both alternatives are supposedly compatible with DASH.
... They both leverage the ISO BMFF file container. A few
differences, compatible with W3C MSE in theory.
... It very much looks like MPEG DASH will be the delivery
mechanism for broadband.
... DASH.js open-source client will be compatible with these
proposals, which is good news for W3C, since that leverages
many Web technologies
... Hybrid content is also useful for supplemental content,
including non real-time content.
... Getting to the runtime environment: initial decision to
leverage HbbTV 2.0 as basis. Some possibility to constrain or
extend that.
... Some liaison with HbbTV. HbbTV 2.0 is still MPEG2-TS
centric.
... Input from Web and TV IG is being considered. Thanks for
putting together that liaison letter!
... Some other areas related to W3C: program announcement and
service guide to leverage OMA BCAST 1.0.
... Watermarking is in-band transmission of information that
doesn't disturb the audio/video streams, but enables the client
to extract that data and provide additional services based on
that.
... No decision as to how to do that in a Web-based runtime
environment yet.
... We've also been looking at emergency alerts, no decision
yet either.
Dong-Young: Watermarking, is it proprietary or standardized?
Giridhar_Mandyam: A bit of both. Some parts may be
standardized, others won't be because companies have their core
business based on that. ... Future will tell how much can be
standardized.
Mark_Vickers: [DLNA update]
... DLNA has been working on CVP2 for some time, now renamed to
VidiPath. Presenting slides from Amol Bhagwat, from CableLabs
and DLNA board.
... Works with any service provider for video. If the service
provider has some gateway (e.g. set-top box), the idea is to
use that gateway to connect all different devices available and
deliver the video to all users' devices.
... If TV supports this, the TV will do discovery across the
network, and discovers which gateway(s) it can use.
... The Gateway gives the TV a URL to the program that it can
interact with. The guide is HTML5. From this point on, the TV is
running an HTML5 application.
... As opposed to the approach taken in the Home Networking TF,
the TV does discovery initially, and then just runs HTML5
content.
... Requirements are diverse, 2 sets of requirements in
practice: Application model is fully based on W3C, IETF, and
related Web standards.
... We believe that the TV Web should be part of the Web. No
non-standard Web API in the spec!
... Then there are a lot of requirements on the user agent,
including support for MPEG-DASH, media formats, discovery
protocols.
... All these things are implemented by the underlying user
agent.
... There's a 1.1 version that is coming along for cloud
guidelines, to be completed next month, on DRM and MSE, as
these things are becoming more stabilized.
... A certification program was launched this September.
... Certified products expected to hit the market in December.
... CableLabs is doing a lot of work towards certification. In
addition to that, they are also providing a full open source
reference software stack.
... Now, looking at it from a Comcast perspective: From 2011
Web and TV workshop keynote, there were a lot of efforts
... done to embrace and extend existing specs. [Showing the list]
UNKNOWN_SPEAKER: Too many of them.
Mark_Vickers: HTML5 came along and did not use any of them.
... We thought it wise to abandon all these references and
fully embrace the Web platform.
... Comcast efforts are to use HTML5 as defined by W3C, without
extension. For instance, CEA-2014 was in the first DLNA RUI,
but is not in VidiPath, to ensure that people would stop using
it.
... If we find new APIs that are missing, we push for the work
to be addressed here. ... Ongoing work on the reference design kit
(RDK).
ddorwin: Is it a goal that browsers can run VidiPath?
Mark: Browsers can decide to go through certification process.
Yes. We want to make these things available to browsers.
Giridhar_Mandyam: Lots of discussion on Network Service
Discovery API within this work. Assuming it's not part of RDK
since Chromium expressed concerns on privacy/security issues.
Your point of view?
Mark_Vickers: APIs that expose servers on the local network are,
for me, a problem. Every company in the world can offer Web
services on your local network, which assumes that the home network
is somewhat safe because it is protected by a firewall.
... One way of exposing network services for me is to do it at
the chrome level. That's safer.
... Same as "window.print". The app does not even know if
you're going to use a real printer in the end, how many
printers there are in your network.
... Another example is photographs, using a dedicated DLNA
server. If you give permissions to an external site to your
DLNA server, you have no way to ensure that the external site
will not pull out all the content.
... I feel it's too dangerous.
Jean-Claude_Dufourd: So, basically, we're looking at an API
that says: "give me a photo server" and you get a pointer to it
without knowing anything about its location or inner
properties.
<ddorwin> The statement "We want to make these things available
to browsers." was actually Comcast's position on the content,
not necessarily via VidiPath.
Jean-Claude_Dufourd: I hear you want something like the
Presentation API.
Mark_Vickers: Yes, I think, that's a very good example.
<ddorwin> There are a variety of reasons that it's very
unlikely browsers would support VidiPath.
Alex: Is VidiPath constrained to the home network?
Mark_Vickers: DLNA has always been scoped to home networks, so
that's the main focus, yes. But you can imagine using the
cloud. I don't know if the organization is going to define that
part though.
<ddorwin> Providers supporting VidiPath will likely also need a
pure W3C-based solution to reach browsers on both desktop and
mobile.
Giuseppe_Pascale: [HbbTV update]
... HbbTV stands for Hybrid Broadcast Broadband TV.
... Started as a Franco-German initiative but is now being adopted
in many different countries.
... Most EU countries are using, trying or considering using
HbbTV. Also deployed in Australia, announced in Russia, etc.
... Looking at the specification, it's really a list of
references, from ISO, OIPF, and W3C of course.
... In HbbTV2, main new technology is the update of the Web
profile used within the specification, based on HTML5, reusing
the profile defined by the Open IPTV Forum. Profile just means
that there is a minimum subset of specs that need to be
supported by all HbbTV devices.
... The only legacy left from CEA-2014 is around the video. The
<video> element from HTML5 can be used, but the old <object>
API is still around, partly for backward-compatibility and for
broadcast video.
... Other areas that are specific in HbbTV include "DRM in a
CAM (Conditional Access Module)" through DVB CI Plus 1.4, HEVC,
subtitles using TTML as profiled by the EBU, recommendations to
handle privacy (Do Not Track support), and multi-stream
synchronization where specific APIs were defined.
... Companion screens scenarios are supported: partially
proprietary and partially standardized. Addresses ways for the
TV to launch an app on a companion device and the opposite way
around.
... App2App communication is a way for two apps to talk to each
other on two different devices. A WebSocket server is running
on the TV. Used as a bridge between the apps.
... To launch an app remotely on the TV, the DIAL protocol
(Netflix/YouTube) is used. There have been lots of discussions
on security and privacy considerations.
... Another major area of development is synchronization of
applications and content across devices. Use cases include
sign-language or second camera within the same device or across
devices.
... Finally, the solution leverages DVB protocols, defined within
the TM-CSS group, using the App2App mechanism to signal the
synchronization mechanism.
... Main improvements concern MPEG DASH. Ad insertion into VoD
content is handled using two HTML5 media elements, with additional
requirements for a smooth transition between them.
... Also Push VoD.
... Some minor improvements on a few other areas.
... A strong emphasis on testing. The HbbTV v2.0 spec is
basically ready but won't be published before a list of
assertions is assembled.
... In v1, the attempt to get members to contribute tests did
not work out so well. In v2, money from the HbbTV Association
is being used to invite members to contribute tests.
... As said, the spec is ready, just missing a few test
assertions.
... No major work on the next version for the time being, as the
focus is on v2. One area of research is IPTV.
... HbbTV is not going to do an "HbbTV IPTV profile", as that's not
needed.
... An area of interest is Independent Application Signaling.
Some networks filter the signaling. There are some discussions
on how to work around that. Watermarking and fingerprinting
could be a solution. That's one possible way of achieving this.
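[Illustrative sketch, not from the presentation: the App2App
pattern described above, with a companion page connecting to the
WebSocket server running on the TV. The endpoint address and
message format below are hypothetical.]
    // Companion app connects to the WebSocket bridge exposed by the TV.
    // The host/port/path are placeholders, not defined by HbbTV here.
    var ws = new WebSocket('ws://192.168.0.10:8900/hbbtv/app2app');
    ws.onopen = function () {
      ws.send(JSON.stringify({ type: 'hello', from: 'companion' }));
    };
    ws.onmessage = function (event) {
      // Messages relayed from the TV application arrive here.
      console.log('From TV app:', event.data);
    };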
Giridhar_Mandyam: Going back to DRM on CAM slide. This was a
little confusing for me. HbbTV claims to be compatible with
HTML5 video but that does not seem to be compatible with the
way DRM works.
Giuseppe_Pascale: Not my area of expertise. It's leveraging
previous work in OIPF.
Giridhar_Mandyam: So my point is that it's not entirely
compatible with HTML5 MSE.
Giuseppe_Pascale: Right. When this was specified, HTML5 MSE and
EME were not stable enough. Might change in the future.
Mark_Vickers: General issue for SDOs on how to reference
on-going W3C specs. We had to go to the board of DLNA to make
an exception to be able to reference HTML5 since it's only
becoming final now.
... Implementations are key here.
... Otherwise you just end up being too far behind.
Giuseppe_Pascale: Yes. It's a bit related to certification as
well.
Mark_Vickers: For certification, we're leveraging as much as
possible tests from existing W3C test suites. They are not
enough from a certification perspective, but usually good.
Dong-Young: Support for MSE?
Giuseppe_Pascale: MSE is not part of HbbTV v2. For ad
insertion, HbbTV relies on two video elements, used in
sequence. All under the control of the application. Not a
specific API per se, just guidelines to do it.
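[Illustrative sketch of the two-video-element approach just
described; element IDs and media URLs are hypothetical.]
    <video id="ad" src="ad.mp4" autoplay></video>
    <video id="main" src="programme.mp4" preload="auto" hidden></video>
    <script>
      // When the advert finishes, hide its element and start the
      // already-buffered main content for a smooth transition.
      var ad = document.getElementById('ad');
      var main = document.getElementById('main');
      ad.addEventListener('ended', function () {
        ad.hidden = true;
        main.hidden = false;
        main.play();
      });
    </script>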
Jean-Claude_Dufourd: It seems the HbbTV Association does not
trust discovery. The initial pairing of devices is entirely
proprietary. A bit weird from my perspective.
... Before the TV knows your tablet, you have to have some
proprietary stuff going on to pair the devices.
Giuseppe_Pascale: Right, but needed for security reasons.
Jean-Claude_Dufourd: Also, companion screen support is optional,
which is kind of weird given the strong focus on that.
Giuseppe_Pascale: Yes, that was a compromise.
Yosuke_Funahashi: Any description about buffer size to enable
synchronization between broadcasting signals and content from
the Internet?
Giuseppe_Pascale: I'm not so familiar on that topic.
Yosuke_Funahashi: Any liaison between HbbTV and DLNA on
certification?
Giuseppe_Pascale: There are conversations going on, trying to
see if joint work can be possible. Nothing to report at this
stage.
Hisayuki_Ohmata: [Hybridcast update]
... Hybridcast is a platform for broadcast and broadband hybrid
service. It has been already launched in Japan.
... Talking about updates from TPAC last year: only NHK
provided Hybridcast services, then trial services by 16
commercial broadcasters and, recently, TBS (Tokyo Broadcasting
System) launched.
... Specification is now version 2.0
... In version 2.0, three major topics to enhance broadcasters'
service: time-shifted viewing, supporting the video element
for VoD with associated APIs for PVR, then companion device
interface and accurate synchronized presentation.
... Also expansion of platform for service providers.
... Going into more details on each topic.
... For time-shifted viewing: MSE and EME are supported, video
content of MP4 and MPEG-DASH can be played. APIs for playback
control are also defined.
... For the companion device interface: app-to-app communication
interfaces, using native apps and HTML5 apps, are defined.
... For synchronized presentation: one of the important
factors, we think. [showing a video demo]
... IPTV Forum Japan defines the architecture model for
synchronization and also APIs for controlling synchronized
playback and exposing the broadcast timestamps.
... There are two types of Hybridcast applications:
broadcast-oriented app and non broadcast-oriented app.
... The first ones are launched from the broadcast signal. The
second ones are typically launched by the user.
... From the standpoint of services, non broadcast-oriented apps
can provide cross-channel services, but broadcast-oriented apps
cannot.
... Different considerations for security and content
integrity: apps are executable under the permission of
broadcasters and they can control API access.
Louay_Bassbouss: What do you mean by multiple apps may run at
the same time?
Hisayuki_Ohmata: Several apps can run on the same screen at the
same time, meaning multiple widgets can run in parallel.
[multiple browsing contexts]
Mark_Vickers: Could you make available the extension APIs that
you came up with? Send them to the IG so that we can look at
them for instance?
Hisayuki_Ohmata: Version 1 is published. Version 2 has not been
translated to English yet.
Mark_Vickers: It would be useful to see the extension APIs, not
the common ones. It would be good for these groups to know about
them, as they could be used to extract requirements for
standardization of new APIs.
... I'd like to make the request.
Hisayuki_Ohmata: I'll see what I can do.
Yosuke_Funahashi: Version 2 is published; only the English
translation is missing, so it should be doable.
Donghoon_Lee: [TTAK.KO-07.0111/R1 "HTML5 Based Smart TV
Platform" update]
... Working for TTA (Telecommunications Technology Association) in
Korea, which provides ICT standards development, certification and
testing services in Korea.
... The HTML5 based smart TV Platform is a specification for
running Smart TV apps, based on HTML5, with no dependency on the
broadcasting system.
... The goal is to promote a Smart TV ecosystem platform.
... TV and set-top box devices can receive broadcast signals,
and non linear streams and data through broadband.
... Looking at the architecture, the HTML5 based smart TV platform
is built on top of the TV operating system, split into
app-specific functions, Web core, and device-specific
functionalities.
... JavaScript-based APIs.
... The work on the spec started in March 2013 and the first spec
was published in March 2013. In April 2014, a second spec was
published (revision 1) that adds second screen and adaptive
streaming in particular.
... Current work to enable new features such as payment, second
screen and content sync.
... Now looking at the application model.
... Four criteria to classify Smart TV applications: 1) execution
method (whether it comes from the store, is started by broadcaster
signaling, or is launched by the user over broadband)
... 2) app packaging (packaged or not).
... 3) Broadcasting relation (activated or inactivated). Some
apps may be suspended.
... 4) Channel Bound, depending on what happens when the user
switches to another channel.
... On the app lifecycle, an app can be launched by the user,
an API call, or can be automatically started. An app can be
terminated when the channel changes, through an API call, etc.
... Some W3C specifications are not suitable for the TV
environment, so profiling has to be done, based on the OIPF profile.
... In addition, there are some Korean requirements on the remote
controller for virtual KeyCode definitions. The Geolocation API is
included, as is a media APIs profile for broadcast video.
... Extended APIs: such as access to broadcasting resources. These
are not defined by W3C, so we have to define them ourselves.
... [showing extended API structure tree]
... The extended APIs are exposed on the "window" object through
"window.tvExt", as different interfaces.
... The app interface to control the current application, and
those that are installed on the device.
... The broadcast interface to get info about TV channel and
program. Can give the URL, name, description, and different
info on program.
... The device interface to get detailed info on the device
itself, including make and model.
... The Multiscreen & DRM interfaces are optional in this
specification.
... There is no extended API to control broadcast video and
channel operation; instead we adopted the HTML5 video element and
defined new semantic requirements for its use with broadcast video
and channels.
... Two states for broadcasting video: binding state and
unbinding state. Setting/Resetting the src attribute on the
video switches from one state to the other.
... In the binding state, video is controlled by the
application, controlled by TV device otherwise.
... [presenting a code example with HTML5 and JavaScript code]
... To support app signaling, the spec references the DVB
signal spec.
... On app packaging, the spec defers to W3C packaging format,
introducing new URI scheme "tvapp", and adding the "apptype"
and "appdomain" elements.
... [presenting demos]
... [First demo shows application lifecycle]
... [Second demo shows an example of multiscreen service
application]
... There's another specification for conformance tests for
HTML5 based Smart TV platform.
... It defines a test environment and test cases.
... We also developed a test development framework, shared with everyone.
... I presented the Smart TV Platform, and certification
program. Thank you very much.
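[Illustrative sketch of the binding/unbinding states described
above, assuming the window.tvExt broadcast interface from the TTAK
spec; the exact property and method names below are hypothetical.]
    // Binding state: the app sets the channel's sourceURI as the video
    // src and takes control of broadcast playback.
    var video = document.querySelector('video');
    var channel = window.tvExt.broadcast.getCurrentChannel(); // hypothetical
    video.src = channel.sourceURI;
    // Unbinding state: resetting src hands control back to the TV device.
    video.removeAttribute('src');
    video.load();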
<Zakim> kaz, you wanted to ask how to use geolocation api and
to ask about the notation for sourceURI and to ask if these
APIs are already available LG/Samsung TVs
Kaz_Ashimura: A few questions. You mentioned Geolocation API.
How do you use that API?
Donghoon_Lee: W3C Profile for Smart TV, subset of W3C
specifications.
Yosuke_Funahashi: Ok, but what's the use case?
Donghoon_Lee: In the case of Smart TV, it detects the location
based on the gateway, and can look for some nearby stores, for
instance.
Kaz_Ashimura: Thank you. You showed some notation about
sourceURI. Was wondering about that.
Donghoon_Lee: Unique URL to identify the channel.
... Unique identifier for the channel.
Kaz_Ashimura: Is this spec already implemented in LG or other
TV sets?
Donghoon_Lee: Currently, in the case of Samsung and LG, they use
their own private applications.
... Broadcasters are preparing to start these services based on
this specification.
[break]
yosuke: We'd like to move second screen and accessibility media
TF topics to after lunch.
TV Control API Community Group
<ddavis> scribenick: ddavis
<scribe> scribe: Daniel
yosuke: There are many efforts in W3C for convergence of TV and
web standards.
... One result of that is the TV Control API Community Group.
The chairs for that are Bin Hu and JC Verdie
<raphael> Will all presentations be linked from
[17]http://www.w3.org/2011/webtv/wiki/Face-to-face_meeting_duri
ng_TPAC_2014?
[17] http://www.w3.org/2011/webtv/wiki/Face-to-face_meeting_during_TPAC_2014?
Bin: The TV Control API CG was spun off from the TV IG based on
gap analysis work last year.
... We started the group's work in May to look into
controlling TV tuners.
... We've done some use cases, requirements and gap analysis.
... We're looking at TV control APIs from Mozilla, Webinos,
other SDOs as well as Samsung and LG.
... We want to create a candidate baseline and then further
that work for a specification.
... Are there any suggestions for other agenda items?
(nothing)
Bin: First, let's review action items:
[18]http://www.w3.org/community/tvapi/track/actions/open
... Action 11 for yosuke is closed (
[19]http://www.w3.org/community/tvapi/track/actions/11 )
... Action 14 for kaz is closed (
[20]http://www.w3.org/community/tvapi/track/actions/14 )
... Action 16 is closed (
[21]http://www.w3.org/community/tvapi/track/actions/16 )
... Action 17 is closed (
[22]http://www.w3.org/community/tvapi/track/actions/17 )
... Next is an overview of specs in Japan.
[18] http://www.w3.org/community/tvapi/track/actions/open
[19] http://www.w3.org/community/tvapi/track/actions/11
[20] http://www.w3.org/community/tvapi/track/actions/14
[21] http://www.w3.org/community/tvapi/track/actions/16
[22] http://www.w3.org/community/tvapi/track/actions/17
tomoyuki: Here's an overview of Cable STB APIs in Japan
... In Japan there are three kinds of broadcast service types:
terrestrial, satellite and cable
... TVs in Japan can play back terrestrial and satellite without
STBs but for cable a STB is needed.
... In Japanese broadcast standards, there are two types of
application runtime.
... One is BML (a subset of HTML 4.01), the other is a
cable-specific STB API.
... Our cable-specific API is limited to additional definitions
related to ARIB
... Three years ago, Japan Cable Labs (JLabs) defined this
specification
... Two years ago, it was standardized as an ITU-T
Recommendation J.296
... One is a JavaScript API, the other is a Java API (mostly
ported from JavaScript API)
... It's mostly equivalent to OIPF-DAE
... You can obtain program information, recording, tuning, etc.
... One extension is to support Japanese broadcasting standards
such as ARIB and JLabs cable standard.
... The Java API is almost completely imported from DAE, with
almost the same functionality as OIPF-DAE.
... The main difference is Android-specific support - intent
support and search using a content provider.
... The spec was defined three years ago so the current spec is
different from the current OIPF-DAE v. 2.3
... Our spec is based on OIPF-DAE v. 2.1
... We don't have an update yet because we have to watch the
trend of common APIs for broadcasting.
... This is why we joined these Web and TV activities.
Bin: Thanks to Tomoyuki.
... Do you think the spectrum of those specs can fit some of
the work we're doing in the TV Control API CG?
... Maybe you can tell us later when we look at the gap
analysis. ... Mapping page:
[23]https://www.w3.org/community/tvapi/wiki/Main_Page/Requireme
nts_Mapping
[23] https://www.w3.org/community/tvapi/wiki/Main_Page/Requirements_Mapping
ddavis: We'll also add the public TV API from Samsung once it's
been updated. ... TV Control API CG wiki page:
[24]https://www.w3.org/community/tvapi/wiki/Main_Page
[24] https://www.w3.org/community/tvapi/wiki/Main_Page
Bin: We have about 80 technical requirements in the wiki page.
... The work now is to identify which features are used and
which functions/classes are used by other APIs.
... We can then see which functions are the best fit for a
baseline spec.
... There are general tuner requirements.
Tuner requirements:
[25]https://www.w3.org/community/tvapi/wiki/Main_Page/Technical
_Requirement#General_Tuner_Requirements
[25] https://www.w3.org/community/tvapi/wiki/Main_Page/Technical_Requirement#General_Tuner_Requirements
[Bin is reading the tuner requirements]
Bin: The Mozilla API supports most of the requirements.
giuseppe: What was the goal of this analysis?
Bin: To look at what other specs are and to establish a
baseline spec for this group.
... For areas that are not supported in other groups, we can
fill those gaps.
Mark_Vickers: Is it the goal to support every one of those
pieces of functionality?
Bin: Yes, that's the goal.
Mark_Vickers: My sense is that this is problematic. We should
be developing APIs that work across all platforms - PCs,
mobile, whatever.
... E.g. a webcam sending out an IP stream should be dealt with
in the same way.
... The current analysis here seems very hardware-specific.
... It doesn't seem to be bringing the TV world into an IP
world. It seems to be creating a tuner API for the old world
and excludes modern channels.
Bin: Not all of this is hardware-specific.
Mark_Vickers: If you look at a webcam, it has an IP address on
the internet. There should be a unified model for dealing with
traditional and non-traditional channels.
giuseppe: We want to reuse what's in HTML5. We should start from
HTML5 and see what delta is missing, instead of trying to adapt an
old API.
gmandyam: Have you looked through the individual APIs and
decided what should be exposed and what should be left as
proprietary APIs?
Bin: What we're looking at is from the TV industry perspective.
... and looking at what will be run in the web environment.
gmandyam: For example, looking at the Mozilla API, there's an
unlock parental control feature.
... From ATSC point of view, we would not want to expose that
for developers to control.
... So what should we expose to developers?
Bin: The input from the TV industry is that this is a requirement,
and we didn't have any disagreements about that.
... Maybe we should go back and revise the requirements.
yosuke: Have you looked at the API from the BBC, I mean, TAL, the
TV Application Layer?
Bin: No
yosuke: The BBC created a library to abstract over the Smart TV
APIs from various CE manufacturers
... - it's available on GitHub and so has become a kind of
standard.
... [FYI: TAL:
[26]http://fmtvp.github.io/tal/getting-started/introducing-tal.
html]
[26] http://fmtvp.github.io/tal/getting-started/introducing-tal.html
jcverdie: Last time I heard it was more of a prototype and at a
higher level, like an application facility more than solving
middleware control like we're trying to achieve here.
Chris_Needham: The TAL is at a higher level than this really.
It provides a set of widgets and components.
... You let the TV app layer generate the markup for you - UI
presentation.
giuseppe: The starting point was using the web to control the
TV. If you look at apps running on a channel the requirements
could be different.
... Maybe grouping them all together could create confusion.
Bin: The angle of this work is from a Web OS perspective.
... The application perspective would use those APIs as well as
presentation features.
... It's a good idea to investigate it from a different
perspective.
... It would be helpful if you could follow the work and we
could open the work on the requirements.
... So we could have two perspectives - one is for
applications, one is for the platform.
... If the angle is a platform/OS one then we should highlight
it.
... So I'll highlight that.
<scribe> ACTION: Bin to update the gap analysis to indicate
what is at an OS level. [recorded in
[27]http://www.w3.org/2014/10/27-webtv-minutes.html#action01]
<trackbot> Created ACTION-210 - Update the gap analysis to
indicate what is at an os level. [on Bin Hu - due 2014-11-03].
Dong-Young: Regarding the system-level APIs, they'll be used by
device makers themselves so they don't really need to be open
to developers.
... They'll be interested more in application-level APIs so it
would be more valuable to concentrate on those.
Bin: I think standardizing developer APIs would ultimately
benefit the consumers.
Dong-Young: I'm not saying we shouldn't standardize
system-level APIs, I'm saying we should prioritize
application-level APIs.
Bin: Thank you. If you're able to look through the APIs on the
wiki page, it would be helpful if you could highlight what you
think is higher priority.
... I would encourage people to edit the mapping wiki page (
[28]https://www.w3.org/community/tvapi/wiki/Main_Page/Requireme
nts_Mapping )
... Please add enhancements and things like priorities.
... The group has a conference call every four weeks which
we'll continue.
[28] https://www.w3.org/community/tvapi/wiki/Main_Page/Requirements_Mapping
ddavis: Is the plan to look at requirements again?
Bin: We're going to concentrate on the mapping analysis but
people can contribute to the requirements.
ddavis: If there are people here from organizations that have a
TV API, we'd love to be able to use them for reference.
<tidoust> [Mark mentioned the possibility to treat channels as
a "playlist" of live streams. That reminds me of abandoned work
on a Gallery API worked upon by the DAP WG: See
[29]http://dev.w3.org/2009/dap/gallery/ ]
[29] http://dev.w3.org/2009/dap/gallery/
ddavis: Please either add your API details to the wiki or just
let us know the URL and someone from the group will do the
mapping.
Bin: Thanks everyone and to the chairs for letting me speak.
Second Screen Working Group
Second Screen Community Group website:
[30]http://www.w3.org/community/webscreens/
[30] http://www.w3.org/community/webscreens/
Second Screen Presentation API:
[31]http://www.w3.org/2014/secondscreen/presentation-api/201407
21/
[31] http://www.w3.org/2014/secondscreen/presentation-api/20140721/
The more recent proposed Second Screen Presentation API:
[32]https://webscreens.github.io/presentation-api/
[32] https://webscreens.github.io/presentation-api/
Louay_Bassbouss: I'm going to show this using the second screen
API itself.
... I have the slides in a browser on my mobile device
... If I click a button, I get a dialog asking which
application I want to present.
... I can then send the slides to the presentation device.
... So this is an actual use case of this API.
... We have about 62 participants in the second screen
presentation community group.
... There are several technologies to enable second screens.
... The goal for us is to show web content on a second screen.
... Web content means not just web pages but also video, etc.
... In this example we've used a mobile device which is the
controller page.
... It uses the requestSession(url) function to send content to
the second screen (presenting page).
... The user agent allows the user to select the second
display.
... User interaction is always required for security.
... There are many technologies that can be used for this -
DLNA, WiDi, Miracast, etc.
... The difference between some of these technologies is that
sometimes you don't have two user agents - maybe just one with
frames sent to the second display.
... The Presentation API abstracts from these technologies.
... Gaming and media flinging are other use cases.
... The current status - in July we published the final report.
... This month the Second Screen Presentation Working Group was
created.
... We currently have 19 participants from 10 organizations.
... Group chair is Anssi from Intel.
... One function requests a session between the controlling
page and presenting page.
... We also have a function to let the controlling page know
about the existence of secondary displays.
... It says whether a display is available, but not how many there are.
... If the controlling page requests a YouTube application and
you have a TV with a YouTube application, DIAL can be used but
some sort of URL mapping is needed for the application to
launch.
... This is in discussion within the group.
... The focus initially is just on web content.
... Out of scope things include lower level APIs to discover
and communicate between devices.
[Louay is showing a preview of the API code]
Louay_Bassbouss: requestSession starts a connection/session.
... onpresent is a listener on the presenting page.
... postMessage and onmessage are used as in web messaging,
with the addition of a close() function.
[Louay is showing an example of the API in use]
Louay_Bassbouss: In the working group there's an update on this
function - startSession and joinSession
... This is to allow for sessions that have been started and
already exist.
... In the presenting page there's a navigator.presentation
global object.
... and a navigator.presentation.session object
... You can use this API to launch apps on a TV and to trigger
devices to start from a TV.
... For implementations, there are two possibilities - single
user agent (e.g. WiDi, Miracast) and two user agent (e.g.
Chromecast)
... We have a FAMIUM prototype implementation made by
Fraunhofer FOKUS.
... Another implementation is a Cordova plugin.
... There are demos and videos on YouTube.
... So that's the current status. Please let me know if you
have questions.
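[Illustrative sketch, not from the talk: the draft interfaces
mentioned above (requestSession, onpresent, postMessage/onmessage);
the WG draft later splits requestSession into startSession and
joinSession, and the URL here is hypothetical.]
    // Controlling page (e.g. on the mobile device).
    var session = navigator.presentation.requestSession(
        'https://example.com/slides.html');
    session.onmessage = function (event) {
      console.log('From presenting page:', event.data);
    };
    session.postMessage('nextSlide');
    // session.close();  // end the presentation when done

    // Presenting page (rendered on the second screen).
    navigator.presentation.onpresent = function (event) {
      event.session.onmessage = function (msg) { /* handle commands */ };
    };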
Oliver: You talked about the abstraction of TV technologies
that are already available. Do you have any more details?
Louay_Bassbouss: You can use these technologies without any
extension but it depends on what the browser supports.
... E.g. Google is planning to support this in Chrome and
they'll support Chromecast only at first.
... But maybe e.g. Firefox will support Miracast.
... So you could have the same application in both browsers but
the different browsers would have access to different devices.
Oliver: So providers will have to be careful about which
browsers can be used.
Louay_Bassbouss: Yes.
tidoust: It's a complex topic and an interoperability issue.
We'd like browsers to support different protocols.
... As you mentioned, the working group has just been started.
Thank you everyone who supported its creation.
... Please join the WG if you're interested. We're in the
process of setting it up. Ask your AC rep to add you.
<tidoust> [To join the Second Screen Presentation Working Group
(member-only link):
[33]https://www.w3.org/2004/01/pp-impl/74168/join ]
[33] https://www.w3.org/2004/01/pp-impl/74168/join
Igarashi: On Wednesday we're having a breakout session about
how to use this API with Web of Things devices.
Louay_Bassbouss: That discussion could also address things such
as playing audio on your home system.
... Also sensors is something to consider.
<kaz> [34]Igarashi's breakout session proposal
[34] https://www.w3.org/wiki/TPAC2014/SessionIdeas#startSession.28.22WoT_devices.22.29
Louay_Bassbouss: The API can be a good starting point for that.
Mark_Watson: We're interested in using this API for sending
content to e.g. a Netflix app.
... We're hopeful this API can integrate with the DIAL
protocol.
<Louay> Presentation API Slides
[35]http://famium.fokus.fraunhofer.de/webscreens/apps/slides/pr
esentations/presentation-api-tpac2014/index.html
[35] http://famium.fokus.fraunhofer.de/webscreens/apps/slides/presentations/presentation-api-tpac2014/index.html
HTML WG Accessibility TF
<JF> www.w3.org/WAI/PF/media-a11y-reqs/
<JF>
[36]http://www.w3.org/WAI/PF/HTML/wiki/Media_Accessibility_Chec
klist
[36] http://www.w3.org/WAI/PF/HTML/wiki/Media_Accessibility_Checklist
JF: I'm John Foliot, a member of the accessibility initiative
at W3C.
... Several years ago we looked at accessibility requirements
for media.
... [[37]http://www.w3.org/TR/media-accessibility-reqs/]
... We have the spec here and a simpler version as a checklist.
...
[[38]http://www.w3.org/WAI/PF/HTML/wiki/Media_Accessibility_Che
cklist]
... The goal is to get this published as a note by the end of
this year so if you have comments, please let us know.
... The document is split into two things - creation of
accessible content (e.g. captions, transcripts, described
audio/video), and system requirements (e.g. tools for playback)
... As we went through all the requirements, we tried to map it
to existing W3C accessibility publications.
... E.g. we have WCAG (Web Content Accessibility Guidelines)
and UAAG (User Agent Accessibility Guidelines).
... We've tried to provide this as a resource for different
user groups.
... When you're creating content, make sure you include e.g.
captions
[37] http://www.w3.org/TR/media-accessibility-reqs/
[38] http://www.w3.org/WAI/PF/HTML/wiki/Media_Accessibility_Checklist
<kaz> [39]WCAG (Web Content Accessibility Guidelines)
[39] http://www.w3.org/TR/WCAG20/
JF: And when you create a system to enable playback, make sure
you provide the ability to control it e.g. without a mouse.
<kaz> [40]UAAG (User Agent Accessibility Guidelines)
[40] http://www.w3.org/TR/UAAG20/
JF: We map to WCAG which has become a legal standard in many
countries around the world.
... We've requested feedback. In this room, you might
especially be interested in the system requirements
information.
JF: It's also relevant for second screen - sign language on a
separate device is a use case.
... We're meeting on Thursday in the morning from 9:00am if
you'd like to join us.
Mark_Sadecki: What we'd like from this group is that if there's
advancement in tests or prototypes, we'd love access to that so
we can look into accessibility.
... Please join the meeting on Thursday to discuss it or ask
questions.
The media requirements document:
www.w3.org/WAI/PF/media-a11y-reqs/
The media accessibility checklist:
[41]http://www.w3.org/WAI/PF/HTML/wiki/Media_Accessibility_Chec
klist
[41] http://www.w3.org/WAI/PF/HTML/wiki/Media_Accessibility_Checklist
Mark: Are there any testing efforts here?
Mark_Vickers: There's no testing framework being created now
and the testing task force has finished.
JF: I'm watching discussions around EME. Ancillary content is
important - if the media is encrypted, what does the transcript
(or clean audio) look like and is it exposed to the relevant APIs?
... There's a concern there and we're trying to investigate it
so I'd love to speak to anyone about that.
<Zakim> tidoust, you wanted to invite people interested to join
the WG
kaz: The accessibility task force helped us to generate use
cases and they'll be discussed here this afternoon.
<kaz> [42]second round use cases (including accessibility ones)
[42] https://www.w3.org/2011/webtv/wiki/New_Ideas
yosuke: Meeting adjourned. We'll meet back here at 2pm - now
it's time for lunch.
New Contributions from IG members
<alex_deacon> scribenick: alex_deacon
Moderator of this session is Glenn Deen - NBCUniversal.
Glenn_Deen: SMPTE Update - Open Binding of IDs to Audiovisual
Essence 24 TB Study Group. How to bind a content ID in a way
that survives compression and distribution through the supply
chain. [43]https://www.smpte.org/standards/reports/
... There is technology available to do this. SMPTE will
establish a working group to develop a standard.
... The hope is to have a standards based mechanism that will
contain an ID for media. (most likely via some watermark
technology)
[43] https://www.smpte.org/standards/reports/
Mark_Vickers: What are EIDR and AdID?
<Lingo> [44]http://eidr.org
[44] http://eidr.org/
Glenn_Deen: EIDR is a unique ID associated with content.
Mark_Vickers: Would SMPTE define an API?
Glenn_Deen: SMPTE would not do this, but this is something W3C
could do.
Pascal: How does this relate to the ATSC work discussed earlier?
Glenn_Deen: Some overlap between SMPTE and ATSC.
... GGIE - Glass to Glass Internet Ecosystem
<MarkVickers> www.ad-id.org
Glenn_Deen: Trends behind this idea. Many orgs, consortia and
companies involved in the complex digital video ecosystem.
... There is a need for an org to look at the “big-picture” and
to help coordinate between orgs/consortia/etc.
... Digital content is reaching a scaling wall: more users, more
devices, more markets, more bandwidth.
... Professional and non-professional tools and workflows are
intermixing.
... Video delivery is getting much more complex - many many
standards required. GGIE is meant to coordinate and facilitate.
... video data has exploded 256x
... Scaling issues: Bandwidth struggles to continue to meet
demand, growth of new devices, growth in users.
... GGIE working on use cases: Use Case #1 - Edge device
selects best source for digital content delivery.
... Use Case #2 - Edge device offers content to other devices
(edge device-device delivery).
... GGIE Goal - Help improve the state of the art in open
standards for both professional and non-professional digital
video focusing on all phases of the digital video life cycle
... Capture-Edit-Package-Distribute-Find-Watch
... GGIE is not a new SDO - Focus on identifying work in
existing SDOs.
... In scope - All content: personal, prosumer, professional. The
whole ecosystem, not just the web. Content workflows.
... Focus on Scalability, content identification, metadata,
user identity, privacy.
... Out of scope - non-Internet Delivery. Codecs/Encoding.
Encryption algs. Legal Topics. DRM.
... GGIE focus will be on use cases only.
... GGIE Outputs: Use cases and gap analysis.
... Who has been talking about GGIE? Comcast-NBCU, Mozilla,
Cisco, ISOC, Ericsson, Civolution.
... GGIE needs an organizational home such as W3C.
... Proposal is to start GGIE as a “sub-thread” of Web&TV IG.
... Are we going too big?
Jean-Claude: GGIE is a beautiful idea, but I don't see a workable
path to achieve all that was discussed.
Glenn_Deen: Initial focus should be on low hanging fruit.
yosuke: we should break this big problem into pieces - to get
some early wins.
Glenn_Deen: It would be crazy to do all things at once; we need to
prioritize and focus on business and user priorities.
... Thinks GGIE is a great opportunity - the time is right. It
will benefit content creators and consumers.
... there is more UGC uploaded to YouTube in 2.5 days than all
studios have created in 100 years.
Mark_Vickers: Scope is broad, but now that we have specific
goals it is within reach of this IG. Liaisons will be key. As
we are limited to requirements, the scope is more manageable.
Jeff_Jaffe: Thinks IG is a good place for this work.
... Do we have the right stakeholders in this IG? Can we make
it successful? Do we have a list of stakeholders not currently in
W3C to ensure this is effective?
Glenn_Deen: No list exists.
Jeff_Jaffe: Need to look for both tech gaps and ecosystem gaps.
Need to think globally.
Glenn_Deen: Challenge is to bring together a diverse community
of specialists. Feels we can quickly fill gaps as they are
discovered.
Bill: Trying to get the structure and first steps (binding ID
via watermark for example) will get the ball rolling.
... Perhaps we need a task force for a next step.
Update on HTML5 EME, MSE and in-band resource issues
<jcverdie> scribe: jcverdie
[MSE and EME by Mark Watson]
Mark: MSE. Candidate Rec in July. Stable since then
... deployed and used quite extensively
... EME, Discussion continues (and will for a while)
... The TAG recently published opinion on EME and raised a
number of issues
... Still a few things are being discussed
... Good news is that it's deployed in various desktop browsers
and being used in various apps including us (aka Netflix)
... WebCrypto: stand-alone API to give access to cryptography
from the web browser
... Ready to exit LC and move to CR
... API is implemented and used in the field in various forms
Raphael: Overlap between media segment and media fragment spec?
Mark: not familiar enough w/ Media Fragment to answer
Raphael: State of bytestream registry in MSE?
Cyril: I don't know
<MarkVickers>
[45]https://dvcs.w3.org/hg/html-media/raw-file/tip/media-source
/byte-stream-format-registry.html
[45] https://dvcs.w3.org/hg/html-media/raw-file/tip/media-source/byte-stream-format-registry.html
Mark V gives more details on this
Mark Watson: append doesn't only mean append to the end, it can
be "on top of" already placed media
... you can do any kind of slicing you like
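[Illustrative MSE sketch of the point above: a segment can be
appended at an arbitrary position by setting timestampOffset, not
only at the end. The segment URL and codec string are hypothetical.]
    var video = document.querySelector('video');
    var mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);
    mediaSource.addEventListener('sourceopen', function () {
      var sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
      var xhr = new XMLHttpRequest();
      xhr.open('GET', 'segment1.mp4');
      xhr.responseType = 'arraybuffer';
      xhr.onload = function () {
        sb.timestampOffset = 30;   // place this segment at t=30s
        sb.appendBuffer(xhr.response);
      };
      xhr.send();
    });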
Mark Vickers: example of audio track with various languages
<ddorwin> Gapless audio playback with MSE:
[46]http://dalecurtis.github.io/llama-demo/index.html
[46] http://dalecurtis.github.io/llama-demo/index.html
Glenn Eguchi (Adobe): difficulty if different parts come from
different providers
ddorwin: I'm not an MSE expert. But you should use the mailing
list to ask
Mark Vickers: is this because you want a quick 1.0 version or
is there another reason?
ddorwin: I'm not sure the tracks API is well implemented at all
MarkVickers: HTML5 is going to REC tomorrow ... spec features
have to be implemented by 2+ implementors to be considered
final
... I want to mention that DataCue has been moved from 5.0 to
5.1 because only Safari implemented it.
... Those of you interested in this critical data stream
access,
... maybe you can help to get more support in 5.1 with
additional implementations.
... Need more useful content to justify the implementation.
Chicken and egg problem
[Daniel Davis on in-band media tracks]
Daniel: Quick update, I'm not active on the Media Resource
In-Band Tracks CG
... "How to expose in-band tracks as HTML5 media element video
audio and text tracks"
... most of the work focuses on MPEG-2 TS now
(link available from the slides)
<MarkVickers>
[47]http://dev.w3.org/html5/html-sourcing-inband-tracks/
[47] http://dev.w3.org/html5/html-sourcing-inband-tracks/
Daniel Davis: The CG cannot create a standard but the work can
pave the way for actual WG work
scribe: The HbbTV consortium brought a proposal to increase DVB
support in addition to ATSC
Mark Vickers: if you have an app consuming a particular piece of
media, you would like the in-band tracks to be exposed the same way
and found with the same names across all browsers
scribe: this is *not* a req to say that a browser should
support MPEG2-TS or whatever
... more: *if* you do support it, please follow this
Yosuke: HbbTV provided the "european" vision for DASH and MPEG
John Foliot: is the list in the wiki a comprehensive list?
Giuseppe: I think these are coming from HTML5
(we are talking about captions, subtitles, descriptions...)
John: it seems there are new ones defined here
Cyril: this list was built from HTML5
... nothing was added
John: there are only 5 values specified in HTML 5
Cyril: the wiki hasn't been updated
François: the spec says the same
Cyril: the current draft has a general introduction and rules for
mapping in-band tracks to HTML text, audio or video tracks
... some in-band tracks won't be exposed as they would be too
expensive
<giuseppe> the list comes from the html5 spec
[48]http://www.w3.org/TR/html5/single-page.html#dom-audiotrack-
kind
[48] http://www.w3.org/TR/html5/single-page.html#dom-audiotrack-kind
Cyril: in some cases one track might be exposed as multiple
tracks
... There are 17 open bugs on the spec but you're welcome to
submit more
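[Illustrative sketch of how in-band tracks surface on a media
element under the sourcing rules above; audioTracks/videoTracks
support varies between browsers.]
    var video = document.querySelector('video');
    video.textTracks.addEventListener('addtrack', function (event) {
      var t = event.track;  // kind/label/language set per the in-band mapping
      console.log('text track:', t.kind, t.label, t.language);
    });
    video.addEventListener('loadedmetadata', function () {
      for (var i = 0; i < video.audioTracks.length; i++) {
        console.log('audio track:', video.audioTracks[i].kind,
                    video.audioTracks[i].language);
      }
    });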
Afternoon break, we'll reconvene at 4pm
<MarkVickers> The different track kinds are in two tables:
[49]http://www.w3.org/TR/html5/single-page.html#text-track-kind
&
[50]http://www.w3.org/TR/html5/single-page.html#dom-audiotrack-
kind
[49] http://www.w3.org/TR/html5/single-page.html#text-track-kind
[50] http://www.w3.org/TR/html5/single-page.html#dom-audiotrack-kind
The Second Round Use Cases
<yosuke>
[51]https://www.w3.org/2011/webtv/wiki/Media_APIs#Submitting_Ne
w_Use_Cases_and_New_Ideas_for_Second_Iteration_of_Work_in_2014
[51] https://www.w3.org/2011/webtv/wiki/Media_APIs#Submitting_New_Use_Cases_and_New_Ideas_for_Second_Iteration_of_Work_in_2014
<giuseppe> scribe: giuseppe
<scribe> scribenick: giuseppe
[Yosuke provides an overview of the approach, as described on the
wiki]
Yosuke: The first iteration of UC/req/gap analysis was done last
year; the result was the TV Control API CG
... second iteration started in May 2014, should be finalized
by end of this year
<yosuke> [52]https://www.w3.org/2011/webtv/wiki/New_Ideas
[52] https://www.w3.org/2011/webtv/wiki/New_Ideas
Yosuke: On the wiki, there is a list of UCs submitted by
members
... we originally had 7 UCs, one was identified as already
covered
... of the remaining 6, one (triggered interactive overlay) was
identified as in scope for the TV control API CG
... the UC about clean audio came from the accessibility TF
... is about being able to separate the main dialog from other
audio source to be able to isolate it
... we examined this use case and found that the Web Audio API is
already a good solution to equalize an existing audio track to fit
an individual's hearing characteristics (see the sketch further below)
... so the only gap was about synchronization of multiple
audio/video tracks
... so it was folded into the media synchronization UC.
... first let's look at fingerprinting
... we looked at it from a user agent perspective
... although there are different algorithms there are some
commonalities and there should be a way to have a unified API
to talk to that module
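[Illustrative sketch for the clean-audio point above: the Web Audio
API can equalize an existing track, here by boosting a speech band;
the filter parameters are illustrative only.]
    var audioCtx = new AudioContext();
    var mediaEl = document.querySelector('audio');
    var source = audioCtx.createMediaElementSource(mediaEl);
    var speechBoost = audioCtx.createBiquadFilter();
    speechBoost.type = 'peaking';
    speechBoost.frequency.value = 2500;  // approximate speech band (Hz)
    speechBoost.gain.value = 6;          // dB boost, chosen for illustration
    source.connect(speechBoost);
    speechBoost.connect(audioCtx.destination);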
giuseppe: do we really need an API for watermarking? isn't the
track API enough to "extract" an ID associated to a piece of
media?
... or was the idea to allow watermarking implemented in JS?
yosuke: We're not set on creating new APIs.
... If that's sufficient, let's go with that. But we still need
to define how we map watermarking to text tracks.
@@@: suggestion to avoid a plugin/modules approach, as it makes
standardization difficult.
yosuke: I share that point. However, a module system should be an
architectural option, as with EME, at least in the early stages.
Dongyoung: do we really need a native module? can't this be
implemented in JavaScript?
defining a generic API could be good anyway, someone could also
then implement a polyfill for it
glenn: did you consider video fingerprinting?
yosuke: not really, we started with audio as it was the main
use case.
... we decided to focus on audio, but just because we had
limited time
jc: what is the value of a fingerprint API? ... can't an app
just send the audio fragment to a server and get back a result?
yosuke: as you need to generate a hash before sending to the
server, we thought you need an API to get this hash before
sending it to the server
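[Illustrative sketch of the hashing step mentioned above, assuming
the audio fragment is already available as an ArrayBuffer; SHA-256
via Web Crypto stands in for a real acoustic fingerprinting
algorithm, which would extract perceptual features instead.]
    function hashFragment(arrayBuffer) {
      // Returns a promise resolving to a hex digest of the fragment.
      return crypto.subtle.digest('SHA-256', arrayBuffer).then(function (digest) {
        return Array.prototype.map.call(new Uint8Array(digest), function (b) {
          return ('0' + b.toString(16)).slice(-2);
        }).join('');
      });
    }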
yosuke: is there an overlap with the GGIE work?
Glenn: probably there is a lot of overlap, although we may be
looking at slightly different use cases
yosuke: What do you think about dealing with this in the "GGIE"
TF?
Glenn: We should work separately while sharing information.
yosuke: Sounds good. ... let's move to media stream
synchronization
... we identified that Media Stream Synchronization is a basic
feature needed to cover different use cases
... we want to explore this further to see if this is already
covered by existing standards
jc: I think UPnP has something about it. Starting from
synchronizing system clocks.
yosuke: we can use UPnP as an underlying mechanism but it still
lacks UA APIs.
giuseppe: The MediaController API provides synchronization features
for HTML but there may be something we should enhance it with.
<MarkVickers>
[53]http://www.w3.org/TR/html5/single-page.html#mediacontroller
[53] http://www.w3.org/TR/html5/single-page.html#mediacontroller
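[Illustrative sketch of the HTML5 MediaController just mentioned:
elements sharing a mediagroup are slaved to one controller.
Implementation support was very limited at the time; file names are
hypothetical.]
    <video src="main-programme.mp4" mediagroup="show" controls></video>
    <video src="sign-language.mp4" mediagroup="show"></video>
    <script>
      // The shared controller keeps both media elements in sync.
      var controller = document.querySelector('video').controller;
      controller.play();
    </script>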
giuseppe: also HbbTV and DVB have done work in this area
yosuke: also hybridcast.
... HbbTV and Hybridcast needed to extend Web standards to
achieve sync
... between broadcast signal and content from the Internet.
... that implies Web standards miss something to achieve our
use cases
... we need to discuss what would need to be defined in W3C
then
yosuke: what should be next step?
giuseppe: before we bring it up to a WG need to identify what
is missing
yosuke: ok maybe we should look at existing work
... where to do this work? let's continue here in the IG.
Wrap-up & Next Steps
<yosuke> scribe: yosuke
<scribe> scribenick: yfunahas
Giuseppe: Each session had its next steps.
... I just want to walk through the result as the wrap-up.
... We'll create a TF to deal with GGIE topics.
Mark: We raised awareness among the TV community of what's
happening in other groups and how to join.
... And Media APIs TF will continue its work: audio
fingerprinting and watermarking, and media synchronization.
Daniel: TV Control API CG has decided to move on to the next
step.
<jcverdie> tentative breakout schedule
[54]https://www.w3.org/wiki/TPAC2014#Session_Grid
[54] https://www.w3.org/wiki/TPAC2014#Session_Grid
Jean-Claude: What about "screen-less" second screen
presentation API?
... startSession is scheduled at 1:15pm in Sequoia
Igarashi: We'll talk about it during a break-out; the CG will
continue as a public discussion group.
Louay: I'm not sure that the WoT view will fit the CG, which is
still focused on presentation devices.
Igarashi: I'd like to see what will be the conclusion of the
break-out session.
<tidoust> Breakout session on startSession for screen-less
APIs:
[55]https://www.w3.org/wiki/TPAC2014/SessionIdeas#startSession.
28.22WoT_devices.22.29
[55] https://www.w3.org/wiki/TPAC2014/SessionIdeas#startSession.28.22WoT_devices.22.29
... Your opinions are welcome.
Giuseppe: AOB?
Igarashi: On Wednesday, we'll talk about not only screen-less
devices but also privacy and security of devices in home
networks.
... HTTPS is a major way to achieve Web API security but lots
of home devices don't implement SSL; this is an issue.
David: CORS is also an important part of UA security. What
about this?
<ddorwin>
[56]http://www.w3.org/TR/mixed-content/#authenticated-origin
[56] http://www.w3.org/TR/mixed-content/#authenticated-origin
<ddorwin>
[57]http://www.w3.org/TR/mixed-content/#is-origin-authenticated
[57] http://www.w3.org/TR/mixed-content/#is-origin-authenticated
igarashi: It's also an important issue. I mean, how to define
origin in the home network needs some work.
... let's continue this discussion in the break-out session.
Giuseppe: Thank you for joining the meeting. The meeting is
adjourned.
Summary of Action Items
[NEW] ACTION: Bin to update the gap analysis to indicate what
is at an OS level. [recorded in
[58]http://www.w3.org/2014/10/27-webtv-minutes.html#action01]
[End of minutes]
__________________________________________________________
Minutes formatted by David Booth's [59]scribe.perl version
1.138 ([60]CVS log)
$Date: 2014-10-31 16:48:19 $
[59] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
[60] http://dev.w3.org/cvsweb/2002/scribe/
--
Yosuke Funahashi
co-Chair, W3C Web and TV IG
Chair, W3C Web and Broadcasting BG
Project Associate Professor, Keio Research Institute at SFC
Received on Friday, 31 October 2014 17:17:53 UTC