[minutes] Web and TV IG F2F - TPAC Lisbon - 2016-09-19

Hi Web and TV IG,

The minutes of last week's F2F meeting are available at:

  https://www.w3.org/2016/09/19-webtv-minutes.html

... and copied as raw text below for archival.

The group held a number of joint meetings with the HTML Media Extensions WG, the TV Control WG, and the Timed Text WG in the morning. The Cloud Browser Task Force met in the afternoon. The minutes are rough and may be incorrect in places. Feel free to get in touch to have them fixed or completed!

Thanks,
Francois.


-----
Web&TV IG f2f meeting in Lisbon

19 Sep 2016

   See also: [2]IRC log

      [2] http://www.w3.org/2016/09/19-webtv-irc

Attendees

   Present
          Mohammed_Dadas(Orange), Kaz_Ashimura(W3C),
          Mark_Vickers(Comcast), Paul_Cotton(Microsoft),
          Francois_Daoust(W3C), Hyojin_Song(LGE),
          Louay_Bassbouss(Fraunhofer), Cyril_Concolato(Paristech),
          Dan_Druta(AT&T), Eric_Carlson(Apple),
          Alexandra_Mikityuk(Deutsche_Telekom),
          Kazuhiro_Hoya(JCBA), Satoshi_Mishimura(NHK),
          Kinji_Matsumura(NHK), Tatsuya_Igarashi(Sony),
          Kiyoshi_Tanaka(NTT), Shi-Gak_Kang(ETRI),
          MiYoung_Huh(ETRI), Toshihiko_Yamakami(ACCESS),
          Kenichi_Nunokawa(Keio_University), Tomohiro_Yamada(NTT),
          Barry_Leiba(Huawei), JP_Abello(Nielsen),
          Koji_Ikuno(FujiTV), Ingar_Arntzen(Norut),
          Sungham_Kim(ETRI), Keun_Karry(IOT_Connected),
          Jungo_Kim(Entrix), Taewon_Kim(Entrix),
          Olivier_Thereaux(BBC), Hiroki_Endo(NHK),
          Mark_Watson(Netflix), Jean-Pierre_Evain(EBU),
          Nigel_Megitt(BBC), Chris_Needham(BBC),
          Colin_Meerveld(ActiveVideo), Giridhar_Mandyam(Qualcomm),
          John_Foliot(Deque_Systems)

   Chair
          Mark_Vickers

   Scribe
          Francois, Chris_Needham

Contents

     * [3]Topics
         1. [4]Status of the action items from last TPAC
         2. [5]Joint session with HME WG - MSE/EME requirements
            from Cloud Browser TF
         3. [6]Joint session with HME WG - MSE/EME update
         4. [7]Joint session with Timed Text WG
         5. [8]Joint session with TV Control WG
         6. [9]Cloud Browser TF
         7. [10]Cloud Browser TF - Joint session with Web of
            Things IG
         8. [11]Cloud Browser TF - interface between the cloud
            browser and the client
     __________________________________________________________

   Mark_Vickers: [going through the agenda: joint session with the
   HTML Media Extensions Working Group (HME WG), then with Timed
   Text WG, TV Control WG and the rest of the day dedicated to
   Cloud Browser TF]

Status of the action items from last TPAC

   -> [12]Kaz updates

     [12] https://www.w3.org/2016/Talks/0919-webtv-ka/

   Kaz: The TV Control CG transitioned to a TV Control WG. Meeting
   tomorrow at TPAC.
   ... ATSC update, Mark will talk about that.
   ... [going through updates while scribe was fighting against
   the polycom]

   -> [13]Mark Vickers's Web and TV IG updates

     [13] https://lists.w3.org/Archives/Public/public-web-and-tv/2016Sep/att-0021/2015-09-19_WebTVIntro_v2_.pdf

   Mark_Vickers: The Web and TV IG takes inputs from members and
   standards orgs, discusses use cases and requirements. The IG
   does not do specs. From requirements, we may file bug reports,
   or kick off work on new APIs.
   ... Active task forces will meet today, Cloud Browser TF in
   particular.
   ... The Media Pipeline TF is done but contributed requirements
   for MSE/EME. It also led to the work on Sourcing In-band Media
   Resource Tracks, developed in a Community Group.
   ... That spec is not properly implemented in browsers for the
   time being. That's still a problem, and the CG is not really
   active for the time being, future is unclear there.
   ... I'd like to mention the Web Media Profile work that we did
   in the past. We put that on hold, mainly because HTML5 was not
   yet stable at the time.
   ... There is an update here with a new Web Media API CG that I
   chair.
   ... The goal is to create a profile of HTML5 specs that are
   widely supported across media devices.
   ... I plan to present this work during a breakout session on
   Wednesday this week.
   ... Associated with CTA WAVE.
   ... The Home Network TF led to the work on the Network Service
   Discovery specification. This got abandoned due to security
   issues.
   ... The Timed Text TF pushed for TTML and WebVTT to be
   addressed in the same group. There will be a joint meeting
   later today.
   ... The Media APIs TF created a set of use cases and
   requirements.
   ... This led to the creation of the TV Control CG, which
   transitioned to a TV Control WG earlier this year.
   ... In terms of active Task Forces:
   ... 1. GGIE (Glass-to-Glass Internet Ecosystem) working on
   identifying new technical work from media capture to
   consumption.
   ... Main requirements are around content identification.
   ... In addition to that, the active work has transitioned to
   IETF drafts.
   ... It's about addressing media segments directly with IPv6
   addresses. The top of the address would be an identifier for
   the content, then more details as you go down through the rest
   of the address.
   ... There's work going on there, in the IETF since it touches
   on IP addressing.
   ... From a W3C perspective, the TF is a bit on hold.
   ... 2. The Cloud Browser TF is really active. Led by Alexandra.
   The afternoon will be dedicated to this TF.
   ... This work also brings some MSE requirements.

Joint session with HME WG - MSE/EME requirements from Cloud Browser
TF

   Alexandra: We kicked off the TF in January this year. Our goal
   was to identify the use cases and requirements for the
   interface between the client device and the cloud part.
   ... We are done with the architecture. We basically started to
   work on the use cases.
   ... We have started to work on EME and MSE because it's a
   complex topic. We need to do some more work on the use cases.
   ... This is just a first draft.
   ... For MSE/EME, the magic somehow happens in the cloud.
   ... The terminal is "dumb", it cannot do a lot.
   ... 3 different use cases have been identified.
   ... The first use case does not bring new requirements for MSE,
   it's meant for legacy devices.
   ... The second use case remotes the API. Things are transparent
   for the client.
   ... The third use case is the most interesting here.
   ... When the browser is fully in the cloud and when the MSE
   magic fully happens in the cloud, this creates new
   requirements.
   ... Some of the actions still need to be performed by the
   client.
   ... The client needs to send requests on behalf of the cloud
   browser.
   ... We're looking at solutions to inject identifiers.
   ... Another requirement is that the client should be able to
   signal the available bitrate data to the cloud browser.
   ... There should also be a way for the Web application to e.g.
   change the timestamps or manipulate the data somehow.
   ... A fourth requirement: there is no way for the client to
   know which XHR response is being appended by calls to
   appendBuffer.
   ... This appendBuffer method should be able to signal to the
   client only the changes to the data.
   ... Another requirement: the client loads the chunks in any
   order. The chunk ordering must be made available to the client
   and the different resources must be made distinguishable for
   the CB client.
   ... One last requirement: XHR data does not necessarily
   correspond with appendBuffer. The browser needs to be notified
   about what and when data can be removed.
   ... Now looking at EME, again the third use case is the one
   creating new requirements.
   ... There needs to be a way to associate the IP address of the
   cloud browser with the address of the client, because both will
   request the keys at license servers.
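
   [The MSE and EME requirements above can be illustrated with a
   small sketch of a hypothetical relay protocol between the cloud
   browser and the client. Every message shape, field and header
   name below is invented for illustration; neither MSE, EME nor
   the Cloud Browser TF drafts define such a protocol.]

```javascript
// Hypothetical cloud-browser/client relay messages; all names invented.

// Relay an appendBuffer call, identifying the originating resource and
// the chunk's position, so the client can tell resources apart and
// restore ordering even when chunks arrive out of order.
function appendMessage(resourceId, seq, byteRange, timestampOffset) {
  return { type: 'mse-append', resourceId, seq, byteRange, timestampOffset };
}

// Tell the client which buffered data can be discarded, since fetched
// (XHR) data does not necessarily correspond one-to-one with appends.
function removeMessage(start, end) {
  return { type: 'mse-remove', start, end };
}

// Client-to-cloud report of locally measured bandwidth, so cloud-side
// adaptive-bitrate logic adapts to the client's link rather than the
// data centre's.
function bandwidthReport(bitsPerSecond) {
  return { type: 'client-bandwidth', bitsPerSecond };
}

// For EME: attach a shared session token to the opaque license request,
// letting a license server associate the cloud browser's address with
// the client's. EME leaves license-server messaging opaque, so this
// would be a server-side convention, not an EME extension.
function licenseRequest(emeMessage, sessionToken) {
  return { body: emeMessage, headers: { 'X-CB-Session': sessionToken } };
}

const append = appendMessage('seg-42', 7, [0, 65536], 120.0);
const license = licenseRequest(new Uint8Array([1, 2, 3]), 'cb-sess-001');
```

   [A complete design would also need to relay media events such
   as "play" from the client back to the cloud browser.]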

   plh: I'm wondering how that translates into requirements for
   EME.
   ... We don't care about the details of what keys contain. This
   is opaque to the spec.

   Alexandra: The key server could block things because it
   receives requests to use the same key from different addresses.

   plh: My feeling is that this seems to be a requirement on the
   license server. But the EME spec does not address this.

   Paul_Cotton: The license server may be depending on the IP
   address of the cloud browser, but EME does not know anything
   about that. That may be the case for some EME implementations.
   ... Why does the client also need to talk to the License
   Server?

   Alexandra: There is a more intelligent use case that does not
   appear on this slide where the cloud browser generates the UI
   and the video stream is processed by the client.

   Paul_Cotton: OK, I do not know whether that's feasible but now
   I understand.

   Mark_Vickers: The keys may also be retrieved by the cloud
   browser and used by the client.

   plh: I don't think that works at all. Imagine a "play" event,
   since you're not playing the video on the cloud browser but
   rather on the client, you don't get the "play" event.

   Alex: Yes, that's one of the MSE requirements I mentioned
   earlier.

   Paul_Cotton: In effect, what you want is you want to take part
   of the MSE/EME logic and move it over into another process and
   define the interface between these two different processes.
   ... You need to define all the communication going back and
   forth.
   ... The logic is very event driven, events need to flow back.

   Mark_Vickers: Note there are products that are deployed and
   used, done in a proprietary way for the time being. Right now,
   MSE/EME cannot be supported, at least not in any standard way.
   ... The idea of the Cloud Browser TF is to see whether we can
   standardize across these groups.

   plh: This is somewhat sorcery to me.
   ... Take the Youtube UI, there's an interface on top of the
   video.

   Colin: We render the UI in the cloud and send the video to the
   client.

   plh: How do you do compositing?

   Colin: We manage to do it on the client

   Alexandra: There is of course some code that runs on the
   client.

   Mark_Vickers: This is an industry that is using Web
   technologies and does not get a lot of visibility in the W3C
   world.
   ... It's used by millions across the world.
   ... MSE/EME is just one of the problems.
   ... That's a very interesting space. CE devices often have
   longer lifetimes than browsers on desktops; the cloud solution
   helps alleviate these constraints.

   Alexandra: [clarifying the cloud browser architecture]

   Mark_Vickers: MSE is running in a "normal" way from a cloud
   browser perspective, and you're worried about relaying what is
   happening on the front.
   ... That could just be a detail of your HTML user agent.
   ... In other words, it could be below the Web level. But if you
   do that, you cannot develop clients that are independent of the
   cloud browser part.

   Alexandra: Right.

   Mark_Vickers: If you want to do a spec to define the interface
   between the front and back part of your pipeline, it seems to
   me that you need a separate spec, not embed it in MSE.
   ... The JS player in the MSE model makes a decision about
   bitrates based on bandwidth and so on. Doesn't the client need
   to live with whatever the bandwidth of the cloud browser is?

   Alex: That's one of the questions. Could the client communicate
   current bandwidth metrics to the cloud browser?

   [Discussion on manipulating the data available on the client
   from the cloud browser, linked to one of the MSE requirements]

   plh: I think you'll have to have limitations. You're not going
   to send video bits back to the Web app, that's not efficient.
   ... For instance, if you take EME, you cannot take the bits
   coming out of EME and put them in a canvas. That would defeat
   the point of EME.
   ... In most cases, the client is not going to modify the bits
   coming out of MSE, so you should be fine.

   Mark_Vickers: I think that's a good first taste of this cloud
   browser world.
   ... We welcome the vendors doing it who joined the discussion
   in the Cloud Browser TF. I'm really glad that people are
   discussing this in W3C.

Joint session with HME WG - MSE/EME update

   Paul_Cotton: Both specs are at Candidate Recommendation phase.
   That's where we focus on testing to check implementations.
   ... MSE is more advanced. We had a CfC last week to request
   publication of MSE as Proposed Recommendation.
   ... The CfC passed, so I sent the transition request on
   Saturday last week.
   ... We're anticipating taking the call for review to the AC
   soon. We're busy assembling the transition request.
   ... All of the relevant information is publicly available today
   (issues, test suite, test results, spec).
   ... We're hoping to get a final Recommendation of MSE first
   week of November.
   ... That would give us the first MSE Recommendation.
   ... EME is not quite as far along in the process as MSE.
   ... It's been published as Candidate Recommendation as well.
   ... Since then, editors have made a number of editorial
   changes.
   ... There's a lot of testing that still needs to be done for
   EME.
   ... We have had a series of tests submitted by Google some time
   ago but these tests need to be converted to Web Platform Tests
   format.
   ... Mark Watson from Netflix has been doing a lot of this work.
   ... For the results of the W3C test suite, you can check the
   archives of the mailing-list.
   ... It is not clear to me or Philippe what the results are so
   far. Implementers seem to have chosen different features in the
   spec.
   ... I hope we'll have a better perspective within two weeks.
   ... We already have a number of formal objections recorded for
   EME and note there will be a public demonstration on Wednesday.
   ... I think that's the status with EME. I cannot give you a
   prediction as to when we can go to the Director for a
   transition to Proposed Recommendation.
   ... I should note that the charter for the HTML Media
   Extensions WG expires end of September, meaning next week.

   plh: Let's be clear. If I don't have a clear plan as to when
   the spec is going to be finished, I cannot tell whether the
   Director will approve the extension. This is serious.
   ... On the one hand, we have people telling us not to finish
   EME. On the other hand, if people who care about EME do not
   inject resources to finalize the spec, I cannot go and ask the
   Director to extend the charter.

   Giri: From a broadcaster perspective, we've identified a hang
   up from a crypto perspective on EME.
   ... Do we need an EME version 2?
   ... Why talk about version 2 if we still don't know whether
   version 1 is going to be done in the end?

   Paul_Cotton: There are tens of features that we triaged out of
   v1. Version 1 does not solve everything that the community
   wants.

   Giri: Would it make sense to include features in v1 right away
   since it's not done yet?

   plh: No, let's be clear. I cannot recommend the Director to let
   the work on EME continue if we cannot get version 1 out soon.

   Mark_Watson: Two problems. For testing, there's not a lot of
   things that remain and it should be easy to work on a proper
   plan to finalize the test suite.
   ... Another problem is implementations, which fail some of the
   tests.
   ... If we go ahead with progress on the Recommendation track,
   we may not get implementer feedback that could improve the
   spec. I'm fine with this approach, just noting that.
   ... I also note that DRM on the Web is a market need.

   Mark_Vickers: I think that we must finish v1. That's absolutely
   necessary.
   ... In terms of motion: committing testing resources should be
   easy to do. For spec compliance, there are 4 codebases that we
   are talking about, and I don't see what I can do there. I'd
   like to hear from them what their implementation plans are.

   Paul_Cotton: I made a suggestion this morning that bugs should
   be opened against implementations so that we can at least point
   out these bugs when we face the Director.

   Mark_Vickers: Third thing is that I see some editing going on
   today.

   Paul_Cotton: That's really editorial and normally that's done.
   David mentioned last week that he was done.

   Mark_Vickers: Is there more we should be doing?

   plh: How long is it going to take? That's the question.

   Mark_Vickers: I guess that's what the F2F should discuss here
   at TPAC.

   plh: If we don't have interop, the motivation for W3C is going
   way down. We need to solve this for version 1. And we need to
   do it fast.

   <Zakim> wseltzer, you wanted to comment on politics

   Wendy: On the politics front, clearly there are some people who
   are misconceiving the role of W3C here.
   ... Even if the market wants DRM, that does not necessarily
   mean that W3C should recommend something in the area.
   ... There are valid concerns, e.g. around the possibility to
   investigate these interfaces from a security perspective.

   Giri: We seem to have interop issues. Past experience in
   Geolocation WG, we went through a lot of pain due to interop
   issues, but that did not mean the spec died.
   ... Do we need to put some statement in favor of the charter
   extension?

   plh: You need to finish the spec.

   Mark_Watson: I have a bit of a concern if we say that external
   protests should influence our internal process. I'm fine with
   addressing the issues that have been raised against EME
   rationally, of course.

   Mark_Vickers: I would add that there is an interoperability
   milestone that has been achieved already with MSE/EME, even
   though it involves polyfills which is not entirely acceptable.
   ... We have deployments on the Web, using different codecs,
   different DRMs.
   ... There used to be zero interoperability. Now we have content
   that is independent of DRM that can be deployed across the
   world.
   ... I'm not saying that's enough, but that's unprecedented in
   the video world.

   Paul_Cotton: [with Chair hat off] I find it frustrating
   that W3C can charter a group like this, go to CR and then
   decide to kill the group. I find it phenomenal that after
   having chaired this group for several years, we hear that this
   work is at risk because it's not making enough progress,
   especially given the progress that was made in the last few
   months.

   plh: With all due respect, previous re-chartering has been
   painful and something that I do not want to reproduce. I need a
   proper plan.

   Mark_Watson: That seems like the usual way of doing
   specifications though. We don't have a lot of control on the
   implementation front. Other groups just carry on while there
   are issues to resolve.

   Mark_Vickers: For example, Web Crypto.

   plh: My problem today is that we don't even know what we're
   lacking in terms of implementation.
   ... I'm not blaming people here who have been active of course.
   ... Don't tell me it's important to you if you did not put
   resources into it since April.

   Jean-Pierre_Evain: We all know that standardisation is about
   masochism and frustration. Nothing new here.

   Paul_Cotton: I would suggest that those of you who are in the
   room and interested show up for the HME meeting today and
   tomorrow and contribute to the work.

   [coffee break, back at 11:00]

Joint session with Timed Text WG

   nigel: The TTWG was rechartered earlier this year
   ([14]https://www.w3.org/2016/05/timed-text-charter.html) with
   no significant change in scope.
   ... The TTWG is working on a draft of a Note, available as an
   Editor's Draft at
   [15]https://w3c.github.io/ttml-webvtt-mapping/ for mapping
   between TTML and WebVTT.
   ... The TTWG has published a Note listing profiles of TTML and
   initiated a process to update the media type registration to
   allow richer description of which profiles of TTML a processor
   needs to support in order to process any given document.
   ... The TTWG published earlier this year a Recommendation for
   Internet Media Subtitles and Captions, consisting of a Text
   profile and an Image profile of TTML 1. This is known as IMSC
   1.
   ... The TTWG is currently focusing on TTML 2 with amongst
   others, the goal of supporting the text layout requirements of
   every script globally. The group intends to update IMSC to IMSC
   2 to incorporate changes that are appropriate for this use
   case, from TTML 2.

      [14] https://www.w3.org/2016/05/timed-text-charter.html
     [15] https://w3c.github.io/ttml-webvtt-mapping/

   Mark_Vickers: Some people talk about simplification of IMSC1.
   You talk about adding new features.

   Nigel: I'm not aware of on-going plans to simplify IMSC1.

   Giri: We're still struggling in the broadcast world with
   timestamped events. Is this being addressed by the on-going
   re-chartering process?

   Nigel: Let's come back to that question.

   nigel: WebVTT's current status is Working Draft.

   Mark_Vickers: Is the CG still active?

   Nigel: Yes, it is. Some participants have moved on.

   nigel: There has been increasing usage of various profiles of
   TTML by other standards bodies including SMPTE, EBU, DVB, ARIB
   and HbbTV.

   Nigel: There's also some work going on at MPEG on CMAF.
   ... There's a general issue with video media and timed text.
   There can be mismatches between the aspect ratio of the video
   and the rectangular box that is to contain timed text.
   ... That affects positioning.
   ... Then there's a general question on the management of time.

   Andreas_Tai: I'd like to address one issue that I think could
   fit within the mission of the Web and TV IG to identify gaps in
   existing technology.
   ... The issue is on how to make use of TextTrack and
   TextTrackCue interfaces
   ... To add a TextTrack to a media element, there are attributes
   that you can use.
   ... What is missing though is some way to identify the MIME
   type of the track.
   ... One solution would be to add a "type" attribute to the
   track element.
   ... That would be similar to the "type" attribute to the source
   element.
   ... A TextTrack is a set of TextTrackCue. These cues are
   defined in a format independent way.
   ... There are some events for when a cue gets active and when
   it becomes inactive.
   ... Apart from the Edge browser, there is no way to initialize
   a generic TextTrackCue.
   ... What is implemented is a specialization of the
   TextTrackCue, in other words a VTTCue.
   ... You can initialize a VTTCue and then tweak it, which is
   probably not the way that it should work.
   ... What could be done is to make sure that there is a
   constructor for a generic TextTrackCue.
   ... We could go further and add an attribute for the payload.
   Also we could define a new API for specialised TextTrackCue
   (e.g. SubtitleCue or HTMLCue that got discussed last year).
   ... I'm not sure what the Web and TV IG could do here, but that
   sounds like the right place to gather requirements.
   ... We propose a breakout session on Wednesday to discuss these
   issues.
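
   [A sketch of the gap described above. The probe below reports
   how a browser exposes TextTrackCue; the constructor signature
   it tries is hypothetical, which is precisely the open question,
   and the "type" attribute shown in the comment is the proposal,
   not current HTML.]

```javascript
// Probe whether a generic, constructible TextTrackCue is exposed. In
// most browsers today only the VTTCue specialisation is constructible;
// outside a browser both globals are undefined and the probe says 'none'.
function genericCueSupport() {
  if (typeof TextTrackCue !== 'function') return 'none';
  try {
    // Hypothetical signature (startTime, endTime, payload) -- the actual
    // shape of a generic constructor is exactly what needs specifying.
    new TextTrackCue(0, 1, 'payload');
    return 'constructible';
  } catch (e) {
    return 'interface-only';
  }
}

// The proposed "type" attribute on <track>, mirroring <source type>,
// would let a page declare the track format (illustrative only):
//   <track kind="subtitles" src="captions.xml" type="application/ttml+xml">

const support = genericCueSupport();
```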

   Mark_Vickers: There is some history on that issue.
   ... Another question is to understand what the user agent has
   to do with text tracks that come within the transport stream.
   ... This relates to the [16]Sourcing In-band Media Resource
   Tracks from Media Containers into HTML spec that is currently
   in limbo.
   ... It's referenced by HTML5 but not implemented yet.
   ... We need a place to publish standard mapping between
   transport standards and HTML5 types.

     [16] https://dev.w3.org/html5/html-sourcing-inband-tracks/

   Giri: There are some timing issues that become problematic with
   TextTrackCue. There are additional delays triggered by the user
   agent having to process the cues and so on.
   ... Is the effort on TextTrackCue going to look into that?
   ... e.g. for tuning into a channel.
   ... There are other approaches, e.g. creating a new type of
   cues, possibly done in CMAF.

   Andreas_Tai: I think the "type" attribute would address some of
   this. The question for me is where should be the home of this
   issue.

   Giri: I also encourage you not to start with WebVTT, but rather
   with TTML and SMPTE, because that's what the broadcaster world
   uses.

   Andreas_Tai: To be clear, I don't propose to improve support
   for VTTCue. The TextTrackCue exists independently of that and
   we shouldn't be using VTTCue here.

   Glenn_Adams: The VTTCue was meant to be generic although
   implementations have not implemented the generic aspects of it.
   DataCue was the closest thing to it.

   Andreas_Tai: I encourage people to come on Wednesday to discuss
   this. I think the Web and TV IG is the right place to gather
   requirements.

   Nigel_Megitt: Another topic I wanted to touch upon is
   requirements for audio (video) description.

   [presenting a requirements draft document]

   Nigel_Megitt: My intent is for TTML2 to support this. The
   intent is that any mixing directive would be addressed by Web
   Audio.
   ... I submitted this doc to the Timed Text WG and to the Web
   and TV IG.
   ... A couple of other things to mention: one of the things that
   have been missing is how you implement accessibility
   requirements for subtitles.

   -> [17]BBC Subtitle Guidelines

     [17] http://bbc.github.io/subtitle-guidelines/

   -> [18]EBU-TT Live Interoperability Toolkit

     [18] http://ebu.github.io/ebu-tt-live-toolkit/

   Nigel_Megitt: From an EBU perspective, there's a draft
   specification around live interoperability toolkit to generate
   EBU-TT documents.

   Mark_Vickers: Thanks for the great update!

Joint session with TV Control WG

   Chris_Needham: The purpose of the WG is to work on an API for
   sourcing media, such as TV and radio from broadcast, IPTV, or
   other sources, allow their presentation onto HTML media
   elements, and be agnostic of underlying transport streams.
   ... It started in 2013-2014 within the Web and TV IG with a
   couple of use cases related to tuner control. Following this,
   we created a Community Group (2014-2016) to gather
   requirements, compare existing APIs and draft an initial
   version of the TV Control API specification.
   ... Mozilla was active in that group as part of their Firefox
   OS for TV effort.
   ... We transitioned to a Working Group in April 2016 and
   published a First Public Working Draft of the spec, which is
   basically the same version as the one developed by the CG.
   ... The spec addresses different features: enumeration of
   tuners and sources, channel selection, playback, Conditional
   Access Modules, timeshifted playback, recording, etc.
   ... The WG may decide to split some of these features out of
   the main spec.
   ... [showing examples of API usage]
   ... The interesting part here is the ability to associate the
   MediaStream to a video element in HTML5.
   ... In terms of current work and next steps, there's some
   effort to adapt the API to radio devices.
   ... We'd like to support radio as well as TV with this API.
   ... It brings some new requirements in terms of new information
   to expose.
   ... We also collaborate with the Auto BG.
   ... User privacy is a big thing that we identified in the
   group. At the moment, the spec does not define the execution
   context that it runs under.
   ... There's a question as to whether the API is tied to the
   runtime of the device, or whether the API is exposed to more
   general applications.
   ... Some features could be fine for device runtime, but
   probably not for general application runtime.
   ... Also, what are the access controls to media and metadata,
   coming from content providers and broadcasters?
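
   [A sketch of the sourcing flow described above, loosely
   following the shape of the TV Control API First Public Working
   Draft. The tuner object is mocked, and the real API returns a
   Promise at each step (elided here), so names and signatures are
   indicative only.]

```javascript
// Mock stand-in for the draft's tuner hierarchy; a real implementation
// would hang off something like navigator.tv and be Promise-based.
const mockStream = { id: 'broadcast-stream' }; // stands in for a MediaStream
const mockTv = {
  getTuners: () => [{
    getSources: () => [{
      getChannels: () => [{ name: 'BBC TWO', number: '2' }],
      setCurrentChannel: () => mockStream
    }]
  }]
};

// Enumerate tuners and sources, pick a channel, and obtain its stream.
function tuneToFirstChannel(tv) {
  const [tuner] = tv.getTuners();
  const [source] = tuner.getSources();
  const [channel] = source.getChannels();
  const stream = source.setCurrentChannel(channel);
  // In a browser, the MediaStream would then be attached to a media
  // element: videoElement.srcObject = stream;
  return { channel, stream };
}

const result = tuneToFirstChannel(mockTv);
```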

   Giri: Will you be considering application-driven ad-insertion
   as part of this topic?

   Chris_Needham: From a broadcaster perspective, this is highly
   relevant yes.

   Giri: Are you thinking about integrating the Permissions API
   for features that could require a more privileged / device
   specific context?

   Chris_Needham: That's a good question. I guess we'll want to
   reduce the number of interactions with the user.
   ... Integrating with the Permissions API seems like a
   reasonable approach.
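
   [A sketch of the Permissions API integration discussed above.
   The permission name "tv" is invented (no such name is
   registered), and the permissions object is mocked
   synchronously; the real navigator.permissions.query() returns a
   Promise.]

```javascript
// Synchronous mock of navigator.permissions; the real query() is async.
const mockPermissions = {
  query: ({ name }) => ({ state: name === 'tv' ? 'granted' : 'denied' })
};

// Gate privileged tuner features behind a permission check: a device
// runtime could be pre-granted, while a general web application would
// need an explicit user grant.
function mayUseTuner(permissions) {
  return permissions.query({ name: 'tv' }).state === 'granted';
}

const allowed = mayUseTuner(mockPermissions);
```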

   Giri: In ATSC, we've been considering that broadcasters could
   stream their own application, similar model as in HbbTV.
   ... In that model, user privacy is somewhat less of a concern.
   The open web is not really the target.

   Chris_Needham: I agree.
   ... Issues for the group: there is a lack of editorial effort.
   There is an open opportunity for anyone to take on the editor's
   role.
   ... It would be interesting to know who's looking at this API
   from TV industry groups, such as ATSC.
   ... More generically, we need more feedback from industries on
   whether we're going in the right direction, and support for
   that effort.

   Mark_Vickers: Do you see this running only on tuner-centric
   devices? Or also on devices that do not have a tuner?

   Chris_Needham: I see this as a sourcing API. I have a slight
   concern about this being too tied to the tuner hardware.
   ... If other groups could review the spec and provide feedback
   on how well it aligns for other sources, that would be great.

   Mark_Vickers: So you're saying that this should work in
   situations where there are no tuners but that this hasn't been
   done yet?

   Chris_Needham: Correct.

   Mark_Vickers: Another question. The notion of BBC2 is
   independent of its delivery mechanism. Is the name going to be
   BBC2, or will it be tied to the tuner and source?

   Chris_Needham: I don't think that's an aspect that the group
   has particularly been looking at. It may be that these things
   vary, e.g. for regional reasons.
   ... At the moment, the way the API is structured is that BBC2
   streamed through a given source is different from BBC2 streamed
   through another source.

   Mark_Vickers: So I don't have a name that is independent?

   Chris_Needham: Not currently.
   ... In our meeting tomorrow, we do have a session on
   integrating with metadata vocabularies. I think that's an
   important aspect to cover.
   ... Existing published metadata should be available for use.

   Mark_Vickers: Yes, that's a problem I'm familiar with. Right
   now, there's no way to correlate sources with published data.

   Chris_Needham: Anyone with an interest on that topic would be
   more than welcome.

   Tatsuya_Igarashi: Comment on ad-insertion. The current spec
   does not really address this problem.

   Chris_Needham: Correct.

   Mark_Vickers: Thanks for the report. This concludes the plenary
   part of the F2F. Next on: the Cloud Browser TF.

   [lunch break]

Cloud Browser TF

   alexandra: [introduces the cloud browser tf]
   ... the task force's mission is to look at use cases for a
   cloud browser architecture
   ... and requirements for interfaces between the cloud browser
   and client
   ... you can see the high level architecture on the Cloud
   Browser TF wiki page
   ... The cloud browser is a flexible approach supporting
   different ways of being deployed

   Colin: The browser runs in a cloud environment and streams the
   UI to the runtime environment
   ... This displays the stream and sends information to the
   cloud, eg, keypress or tuner information
   ... Also streaming of out-of-band media
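
   [The upstream direction described above can be sketched as a
   set of client-to-cloud messages. All message and field names
   are invented for illustration.]

```javascript
// Messages the runtime environment might send back to the cloud
// browser: forwarded input events and local tuner state.
function keyEventMessage(key, timestampMs) {
  return { type: 'input-key', key, timestampMs };
}
function tunerStateMessage(channelId, locked) {
  return { type: 'tuner-state', channelId, locked };
}

const up = keyEventMessage('ArrowUp', 1234);
const state = tunerStateMessage('ch-7', true);
```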

   Alexandra: We have dependencies on other groups, such as HTML
   Media Extensions, TV Control, Multi-device timing,
   Accessibility platform architecture
   ... are there others to add to this list?

   Louay: Also the Second Screen WG, which has two specs: the
   Presentation API and Remote Playback API
   ... We can reference the existing use cases, which are the
   same
   ... Maybe there will be additional requirements for the cloud
   browser

   <Louay> -> [19]Second Screen Use Cases and Requirements

     [19] https://github.com/w3c/presentation-api/blob/gh-pages/uc-req.md

   Alexandra: [some discussion of what are dependencies and what
   is related work]
   ... I want the group to discuss what our final goals are,
   different approaches
   ... Looking at what we've done so far, Deutsche Telekom have
   been looking at the TV Control API
   ... We've written a Cloud Browser introduction on the TF page
   ... It clearly describes what the CB is
   ... The introduction needs reviewing
   ... We'll produce an official document from this after TPAC
   ... We're now half-way through use case and requirements, with
   a plan to finish by end of March
   ... There are open questions, eg in terminology: zero-client or
   runtime environment

   Dan: Why not just call it the cloud browser client?

   Colin: This introduces some ambiguity
   ... [Presents [20]Introduction cloud browser]

     [20] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF/Introduction_cloud_browser

   Alexandra: I'd like people to review the [21]Cloud Browser
   architecture

     [21] https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF/Architecture

   Alexandra: [ presents an overview of the architecture page ]
   ... There are four approaches
   ... For each approach, we've assigned functionality between the
   cloud environment, cloud browser, and client
   ... This work is ready to be finalised

   Dan: How important is synchronisation, and has that been
   considered, eg, keeping the UI in sync with the media?

   Alexandra: We have a use case for that

   Colin: It is very important, it needs to be precise

   Dan: How do you handle failures on the UI side, eg, if
   connectivity for the UI is lost?

   Colin: This currently depends on the implementation, but would
   need to be standardised

   Dan: Although you're trying to be stateless, this would need
   some tiny piece of state information

   Alexandra: Synchronisation is also important for accessibility

   John_Foliot: Is the primary use case here for video, or as a
   general purpose browser?
   ... What is the input mechanism? A traditional computer has a
   keyboard or pointer or touch interface

   Alexandra: So far we've focused on video delivery, and input
   with a remote control
   ... It's not a remote desktop, more a TV user interface

   Colin: We don't have solutions yet for DRM bridging
   ... So far we've focused on architecture, but we should define
   new use cases

   Cyril: What about subtitles?

   Colin: If these are rendered by the cloud browser, it's easy,
   but it needs synchronisation if rendering happens in the client
   ... Similar case for advertisements

   Louay: If you send the timelines for both streams, you need to
   ensure all streams are in sync
   ... We should clarify synchronisation of multiple streams to a
   single device, and synchronisation for companion devices
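
   [Scribe note: the multi-stream synchronisation Louay mentions boils
   down to comparing timeline positions and correcting drift. A hedged
   sketch follows; the thresholds and names are illustrative, not from
   any spec discussed in the session.]

```javascript
// Sketch of multi-stream synchronisation: compare the timeline position
// of a slave stream against a master stream and derive a correction.

// Positive drift means the slave is behind the master (in seconds).
function computeDrift(masterPosSec, slavePosSec) {
  return masterPosSec - slavePosSec;
}

// Nudge the slave's playback rate to close the gap gradually; seek
// instead when the drift is too large for rate adjustment alone.
function correction(driftSec, seekThresholdSec = 1.0) {
  if (Math.abs(driftSec) > seekThresholdSec) {
    return { action: "seek", by: driftSec }; // hard resync
  }
  // Small drift: play slightly faster (or slower) until caught up.
  return { action: "rate", rate: 1 + 0.1 * driftSec };
}
```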

   Ingar: If the cloud browser means moving functionality to the
   cloud, then synchronisation also moves to the cloud

   Alexandra: We can capture these as several use cases

   John_Luther: Is it a goal to standardise the control protocols?

   Colin: Not at this stage, so far only use cases, but protocols
   might happen outside W3C

   Dan: Synchronisation between devices is more a requirement than
   a use case

   ???: I agree, and the most critical point could be latency

   <tidoust> scribenick: tidoust

   [session resumes]

   Alexandra: Thanks to everyone for propositions in last session.
   ... Continuing with the use cases now.
   ... Core use cases are those that enable the communication
   between the client and the cloud browser.
   ... That includes Control, State, Session, Communication use
   cases.
   ... We have to think about Authentication use cases, whether we
   have to include them in our task force or not.
   ... Then discovery and synchronization.
   ... The use cases appear on our Wiki.
   ... On top of these core functions, we have essential services
   use cases for data exchange between the client and the cloud
   browser.
   ... Here we have video and audio use cases, UI use cases. And
   then of course security use cases.
   ... Also accessibility that we need to talk about.
   ... We put the major TV service use cases and went through them
   to see if anything was missing from our core platform.
   ... This should be mapped onto core and essential services use
   cases.

   Tashiki: My sense is that these use cases depend on how you
   articulate the cloud browser work. For instance, you may have a
   low-level device with rendering that happens on the cloud.
   ... Articulating the use cases depending on the type of cloud
   browser will give different perspectives.
   ... Sometimes, there is very limited power to embed a browser.
   In that case, the UI use cases viewed from a client perspective
   seem out of scope for W3C.
   ... I have no objection to doing the work to identify
   requirements at W3C, but work may need to be done outside.

   Colin: Right, when we started this work, we thought in terms of
   use cases, but then we realized that people had different
   things in mind when referring to cloud browsers. So we worked
   on an architecture document to start with, instead.

   Louay: There are two types of synchronization: multiple stream
   synchronization and multi-device synchronization.
   ... As a core function, we could have multi-stream
   synchronization. Multi-device synchronization would belong to
   the essential services.
   ... We don't need to have a use case for multi-device
   discovery, communication and synchronization. We can reference
   the Second Screen Presentation API there.

   Colin: We don't use discovery in practice, but there may be a
   future need for this.

   Alexandra: Use cases have question marks. We still don't know
   if we're going to include them or not.

   Colin: Maybe this is a good opportunity to discuss whether this
   is in scope.

   Alexandra: What is your perspective on authentication?

   Chris: What are the use cases that drive this?

   Colin: I don't think we have any for authentication for the
   time being.

   Kang: Are we talking about authentication with a server?

   Alexandra: We were thinking about authentication between the
   client and a server. I don't know whether it is in scope or
   not.
   ... We have identified control use cases. They still have to be
   reviewed.
   ... TF participants are encouraged to review the use cases.
   Others may chime in as needed. Work happens in public.
   ... For state, we have e.g. tuner state, which relates to the
   TV Control API.
   ... For session, use cases are somewhat similar to the control
   use cases and we had a discussion on whether they could be
   merged. For the moment, we kept them separate.
   ... For communication, we have a use case but it needs to be
   re-written following the use case template.
   ... I wasn't clear whether we have requirements derived out of
   that use case.

   Colin: Right, that still needs to be assessed.

   Alexandra: For security, we'll discuss with the Web of Things
   IG. Based on side discussions with Louay, maybe we do not need
   to decide in the TF.
   ... Discovery could be part of a multi-device use case.
   ... Same for synchronization, where multi-stream
   synchronization should stay here but multi-device sync should
   be moved to multi-device use cases.
   ... Moving on to essential services use cases. The payload
   between the client and the cloud browser.
   ... Here, we have identified a video use case (MSE and tuner
   use cases).
   ... We mentioned MSE use cases in the morning during the joint
   session with the HTML Media Extensions Working Group.
   ... Review is missing for tuner use case.
   ... Some low-level approach could perhaps lead to a situation
   where no update is needed to MSE. But of course, there are
   drawbacks with any approach.
   ... We also have an overlapping use case for video. The
   question is whether it's a real use case.

   Colin: It does not seem so.

   Alexandra: It needs to be performed by the client and not
   provided by the API?

   Colin: Right.

   Alexandra: Then we have the audio. The background of this use
   case is accessibility where you provide some sound effect. It
   will reference the synchronization use case.
   ... Not sure whether the group wants to address this as a use
   case.
   ... I'll take this to John.

   Kang: We're also thinking about double streams for audio, right?

   Colin: When we say video streams, we mean media streams.

   Alexandra: That's a good comment, maybe we should update the
   term.
   ... I'm not clear whether there's an accessibility API that
   browsers need to support.
   ... I think the UI EPG use case can be handled as part of
   normal function use cases. To summarize, the AIT table is
   available on the client, but to build the EPG, we need this
   info to be shared with the cloud browser.

   Kang: In the IPTV case, that's not really the case.

   Alexandra: Sometimes we get different IDs for channels and we
   need to fuse them, that can be very tricky.
   ... We also have the UI Switch that was brought by Entrix.
   ... On very old browsers, we cannot deliver Youtube for
   instance, and that's when you'll want to switch to a cloud
   browser.

   Colin: More generically, how you execute native applications on
   the client. It could be through a Web browser.
   ... In a cloud browser architecture, you would like to have
   everything in the cloud, even the main device UI.

   Alexandra: So, that's the interface between cloud browser and
   client app environment.
   ... For security, we have EME use cases with two different
   approaches, including a complicated one where you try to
   duplicate things across the client and cloud browser.

   [Discussion on accessibility requirements, in relation with the
   Presentation API, the Remote Playback API and the Cloud Browser
   TF. Mentioning Media User Interface Accessibility Requirements:
   [22]http://www.w3.org/TR/media-accessibility-reqs/ ]

     [22] http://www.w3.org/TR/media-accessibility-reqs/

   Alexandra: Going through essential services use cases. Most
   need to be described (VoD, timeshift, Ad-insertion, gaming,
   etc.)
   ... We have added the catch-up service as well. Also the
   OTT-based video app (amz, nflx) use case.

   Louay: About HbbTV, maybe use Hybrid TV application, because it
   could be HybridCast or some other standard.

   Alexandra: OK.
   ... The exact scope of this task force is not entirely settled
   but that can probably be done over time.

Cloud Browser TF - Joint session with Web of Things IG

   Alexandra: No specific agenda, we just thought it would be
   useful to discuss.
   ... [presenting the cloud browser task force]

   Joerg: Chair of the Web of Things IG.
   ... We're coming from quite a different side. We'd like here to
   keep you informed, not sure how much of it will be useful.
   ... Status update and on-going AC review on the proposed Web of
   Things WG charter.
   ... We're trying to interconnect silos. Different application
   domains such as consumer home, transport, health, cities, etc.
   ... The goal is to find synergies.
   ... IoT is very much looking at how you can share information
   across these domains. WoT is looking at how you can develop
   applications across these domains.
   ... We're looking into this, e.g. because there are many more
   Web developers than embedded developers (711000/3800 looking at
   LinkedIn profiles).
   ... Also we want to make applications easier to write to enable
   the long tail market for embedded devices. Also Web
   technologies are useful for Web-grade multi-stakeholder
   security.
   ... That's quite interesting to see how we can learn from the
   Web here.
   ... Finally, it helps simplify the integration of embedded
   devices.
   ... Now, this opens a number of questions around scope. We
   don't want to be too open and generic, and don't want to be too
   specific either.
   ... The first four questions we identified: discovery, how do
   things find each other? How do things describe themselves?
   Privacy and Security? Scripting APIs?
   ... Standardization at the IETF is taking care of the
   protocols.
   ... So information can be exchanged. To be able to make
   applications, we need to make additional building blocks.
   ... The IG has quite a lot of different companies in there.
   ... We started to work in Spring 2015 on use cases and
   requirements. This is tricky.
   ... We're doing plugfests to prove the interoperability of our
   proposed solutions.
   ... Looking a bit more at the internals, we have the "Servient",
   which can run on the server or on the client.
   ... The resource model describes what the thing can achieve.
   ... Different protocols can be used through protocol bindings,
   e.g. CoAP.
   ... The Thing Description allows you to discover what that
   thing is doing and how you can interact with it.
   ... [going through plugfest example]

   Alexandra: Thank you for the introduction.

Cloud Browser TF - interface between the cloud browser and the client

   Alexandra: When we started this work, we thought we'd be able
   to come up with a browser API that we could propose to browser
   vendors. So we started to work. There are lots of
   graphic-related tasks.
   ... The basic question is: do we try to make an API for the
   browser? Or do we work on the level below, which could end up
   being developed at IETF?
   ... If we have an API, the application will need to use it,
   which gives it more control.
   ... [projecting some open questions on screen]
   ... These questions might change during the discussions.

   Louay: From my perspective, having worked on the Presentation
   API: for now, there are different implementations of the API on
   top of different protocols.
   ... From an application perspective, it's the same application.
   Of course, it's not interoperable between implementations, but
   that's the goal of the new work being carried out by the
   Second Screen CG.
   ... Here, it could be similar if we develop an abstract API
   that could be implemented on top of different protocols.
   ... I think this approach could work.
   ... In the future, interoperability between different browsers
   would be better.
   ... It may not be W3C specifications, maybe IETF specs.
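
   [Scribe note: Louay's "abstract API on top of different protocols"
   could be sketched roughly as below, loosely following the
   Presentation API pattern of one application-facing API over pluggable
   transports. All class and method names are hypothetical.]

```javascript
// Sketch: the application always talks to the same session API, while
// the underlying protocol is swapped in as a "binding". A loopback
// binding stands in for a real transport (WebSocket, proprietary, etc.).

// A protocol binding only needs to know how to send raw messages.
class LoopbackBinding {
  constructor() { this.sent = []; }
  send(raw) { this.sent.push(raw); }
}

// The application-facing API stays the same whatever binding is used.
class CloudBrowserSession {
  constructor(binding) { this.binding = binding; }
  sendInput(event) {
    this.binding.send(JSON.stringify(event));
  }
}

const binding = new LoopbackBinding();
const session = new CloudBrowserSession(binding);
session.sendInput({ type: "keypress", key: "Enter" });
```

   This mirrors how Presentation API implementations differ in transport
   but expose one API to the page; the same separation would let a cloud
   browser API sit on top of, say, an IETF-defined protocol.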

   Francois: What consumes the API? No Web runtime on the client,
   right?

   Colin: It depends, there may be a low-end browser runtime on
   the client, and the app may be willing to connect to a Cloud
   Browser.

   Francois: OK, so you want a way to retrieve a MediaStream out
   of a Cloud browser and then pass on events and the like in
   between the devices. That resonates with some v2 use cases for
   the Presentation API where the group may consider cloud-based
   second screens.

   Kaz: When we say API here, it does not necessarily mean a JS
   API, right? It could be an API between the client runtime and
   the cloud browser runtime. It would be built on top of
   WebSockets for instance.
   ... In the Automotive group, the group has been talking about a
   sockets-based approach.
   ... The Cloud Browser TF guys might want to connect with Auto
   folks tomorrow.

   <kaz_> [23]Vehicle Signal Server specification by the
   Automotive WG

     [23] http://w3c.github.io/automotive/vehicle_data/vehicle_information_service.html

   Alexandra: Maybe it's interesting to take a look at who would
   like to implement this possible API in devices.
   ... Thanks for your participation.

   [End of minutes]

Received on Monday, 26 September 2016 07:52:49 UTC