FOBTV and DLNA (was: Re: A Rough Mapping Between Hybrid TV Functional Areas and W3C Spec/Group Scopes)

Hi all,
(CCing the TV Control API CG)

Olivier Carmona has kindly volunteered to provide public input on FOBTV and DLNA for the mapping table; he and I will work collaboratively on extending the table to cover the two specs next week. We'll draft it off-list and ask the IG members to review and comment on the initial draft later on the list. If you'd like to contribute to the initial drafting work, please let me know, and I'll put you in the loop.

The workflow for this extension is:

1) Fork the table.
2) Start editing the forked table to include public input on FOBTV and DLNA.
3) Announce to the IG that we are working on it, and keep people who are interested in this work in the loop. <— HERE
4) Finish creating the draft table.
5) Call for comments about the draft table on the IG ML.
6) Update the table reflecting comments.
7) Merge the updated table into the original table, which may have received improvements to other parts from other IG members.

Regards,
Yosuke


> On Apr 18, 2015, at 4:27 PM, Yosuke Funahashi <yosuke@funahashi.cc> wrote:
> 
> Hi all,
> (CCing the TV Control API CG)
> 
> The mapping table of the living version [1] has been updated to reflect the conversation I had with Ingar, the chair of the Multi-Device Timing CG. This update improved and enhanced the functional areas through analysis of the CG's use cases [2], and also resulted in the introduction of a new background color, 'missing in the existing Hybrid TV standards,' which should benefit both Web and TV stakeholders.
> 
> For the details of the conversation, see the quoted text below.
> 
> Any comments would be highly appreciated.
> 
> [1] http://goo.gl/KEhXPd
> [2] http://webtiming.github.io/timingobject/#use-cases-and-requirements
> 
> Regards,
> Yosuke
> 
> 
> On Fri, Apr 17, 2015 at 10:29 PM, Yosuke Funahashi <yosuke@funahashi.cc> wrote:
> Hi Ingar again,
> 
> The idea of distributed recording and playability sounds quite interesting. I'd like to see the discussion in the CG about it and its use case description in the editor's draft, which would tell us where or how the use case or functional area fits best in the table.
> [snip] 
> 
> On Fri, Apr 17, 2015 at 9:19 PM, Ingar Mæhlum Arntzen <ingar.arntzen@gmail.com> wrote:
> 
> Hi Yosuke.
> 
> I'm very happy with these modifications.
> 
> One more comment about 2), the recording bit. I generally don't see multi-device timing as limited to non-AV content. For instance, we have used it to record live audio streams simultaneously at different destinations - according to a shared "production" clock - and then to play back those same music frames according to a slightly time-shifted "consumption" clock at other destinations. So, distributed recording of AV is an important use case for the timing CG.
> 
> I think, however, that in your spreadsheet the recording use case - coming from the STB world - is very much tied to activities of the STB. My point is that time-sensitive, distributed recording is perhaps even more valuable for broadcasters than for STB vendors.
> 
> You are welcome to share this conversation with the IG members:)
> 
> Best,
> 
> Ingar
> 
> 2015-04-17 13:36 GMT+02:00 Yosuke Funahashi <yosuke@funahashi.cc>:
> Hi Ingar,
> 
> Thanks a lot for your comments.
> 
> On Thu, Apr 16, 2015 at 5:30 PM, Ingar Mæhlum Arntzen <ingar.arntzen@gmail.com> wrote:
> [snip]
> The multi-device timing group is set up around media sync - both single- and multi-device - and this is correct. However, our use cases also show that multi-device timing is very much relevant for other categories. 
> 
> http://webtiming.github.io/timingobject/#use-cases-and-requirements
> 
> 1) - Second Screen Scenarios. Precise timing is essential to secondary screen scenarios, not only for the presentation, but also for the production of timed content. Time-shifted second screen content for on-demand viewing is also an important use case for timing.
> 
> I think precise timing is essential to some second screen scenarios; there are already second screen services in the market that rely on only rough synchronization.
> 
> I agree that the second screen scenarios functional area is in scope for the CG.
> 
> 2) - Downloading/Recording/Time-shifting. Timing does not apply to downloading, but it does apply to recording and time-shifting. Here we are not thinking of recording a TV show on the STB, but of other forms of recording - like a broadcaster recording (timestamping) live input channels (Twitter, sensors, etc.) for time-shifted replay. 
> 
> I understand your point. The functional area reflects the current Hybrid TV standards, such as HbbTV 2.0 and Hybridcast 2.0, which don't contain a recording/time-shifting feature for non-AV streams. To make that clear, I'd like to add sub-areas to the functional area: AV content and non-AV streams (Twitter, sensors, etc.). I'll give the latter sub-area the CG's green background color.
> 
> I'll also add a new background color meaning 'missing from current Hybrid TV standards,' and apply it to the 'non-AV live streams' sub-area.
> 
> 3) - Stream Event Trigger is placed under broadcast signal. Timed triggers have traditionally been understood as something to put into the broadcast signal. However, multi-device timing enables a very attractive alternative, where event triggers are transported independently over IP/Web and presented at the correct time by, say, an HTML overlay or a browser engine within the STB. 
> 
> Good points. Again, the dev meetup focused on the functional areas of the existing Hybrid TV standards, which is why the devs didn't include out-band mechanisms. I think adding out-band mechanisms as functional areas or sub-areas to this table is beneficial for both Web and TV stakeholders.
> 
> I'll change the name of functional area 'Broadcasting Signal' to 'Functions Traditionally Through Broadcasting Signal', and add 'in-band' and 'out-band' sub-areas where needed.
>  
> 4) - Timed Text. The same is true for timed text. Traditionally these are bundled with the broadcast stream, and played out by a player within the tuner/STB. This naturally implies that timed text media formats must be standardized so that players can understand them. Again, multi-device Web timing offers an alternative approach where timed text is simply presented by a timing-sensitive Web-based triggering mechanism. This means that there is a lot more flexibility in how timed text is represented, transported, and visualized, by one device or as many as you would like. It also means that you can have as many alternative text tracks as you like, and they can even be dynamic. If the timed text is hosted by an appropriate Web service you may author or modify timed text just-in-time or during the presentation.
> 
> The Timed Text functional area covers both in-band and out-band streams. I think it's relevant to place this area in scope for the CG, because precise timing always matters for subtitles, the primary use case of timed text in TV-like services. I'll also make it clear that this functional area covers both mechanisms.
>  
> [snip]
> In a broader perspective - I think the lack of distributed timing is one of those hidden assumptions that have silently shaped current technologies and categorizations. Now, with distributed timing on the table, there is an opportunity to actively re-think traditional approaches to many problems in this domain. We have been doing that for the last 5 years, so we know it's worthwhile :)
> 
> Agreed. That's in line with what the IG members were discussing last year about the multi-device/stream use cases. I'm really happy that the CG is now working on the use cases from a broader perspective. :-)
> 
> What do you think about these modifications?
> 
> Additionally, would it be okay with you if I share this and the following emails with the IG members on the public ML after we reach consensus and modify the table accordingly? I think your comments are highly valuable for the IG members, and reading this conversation will help the members understand the motivations for the modifications and stimulate them to come up with additional ideas to improve the table.
> 
> Regards,
> Yosuke
>  
> Best,
> 
> Ingar Arntzen (chair multi-device timing community group)
> 
> 
> 2015-04-15 17:02 GMT+02:00 Yosuke Funahashi <yosuke@funahashi.cc>:
> 
> Hi all,
> (CCing the TV Control API CG)
> 
> Regarding the WebTV App developer meetup in Tokyo last month, I've just finished creating a takeaway document, 'A Rough Mapping Between Hybrid TV Functional Areas and W3C Spec/Group Scopes,' which reflects the preliminary discussion we had at the meetup before we dug into the TV Control API editor's draft. [1]
> 
> PDF version:
> https://www.w3.org/2011/webtv/wiki/images/3/3f/A_Rough_Mapping_Between_Hybrid_TV_Functional_Areas_and_W3C_Spec_Group_Scopes.pdf
> 
> Living version:
> http://goo.gl/KEhXPd
> 
> Thanks to Daniel, Dewa-san, and Giuseppe for their advice on improving the document.
> 
> Any comments would be highly appreciated.
> 
> Please feel free to edit the living version directly.
> 
> I hope this document helps us achieve better Hybrid TV convergence through open Web standards in both the short and mid term.
> 
> Regards,
> Yosuke
> 
> 
> [1] https://lists.w3.org/Archives/Public/public-tvapi/2015Mar/0030.html
> 
> -- 
> Yosuke Funahashi
> co-Chair, W3C Web and TV IG
> Web Media Specialist, W3C
> Project Associate Professor, Keio University
> 

—
Yosuke Funahashi
co-Chair, W3C Web and TV IG
Web Media Specialist, W3C
Project Associate Professor, Keio University

Received on Tuesday, 21 April 2015 06:54:40 UTC