
[minutes] Web and TV Interest Group meeting - 11 June 2014

From: Daniel Davis <ddavis@w3.org>
Date: Wed, 11 Jun 2014 10:27:28 -0400
Message-ID: <53986750.80201@w3.org>
To: "public-web-and-tv@w3.org" <public-web-and-tv@w3.org>
Hi all,

Thank you for your time in the call. The minutes are here:

and pasted as text below.





                               - DRAFT -

                Web and TV Interest Group Teleconference

11 Jun 2014

   See also: [2]IRC log

      [2] http://www.w3.org/2014/06/11-webtv-irc


   Present: Yosuke Funahashi, Mark Vickers, Mark Sadecki, Janina
   Sajka, Clarke Stevens, Paul Higgs, Glenn Deen, Bin Hu,
   Wu Wei, Cyril, Daniel Davis





     * [3]Topics
         1. [4]AC meeting update
         2. [5]Use-case gathering
     * [6]Summary of Action Items

   <trackbot> Date: 11 June 2014

   <Bin_Hu> zakim, aaaa is me

   <scribe> scribenick: ddavis

   <scribe> scribe: Daniel

   <jcverdie> Regret+ JC_Verdie

   yosuke: We've just finished the AC meeting and today we have
   the AB meeting, so there's a relatively small number of people
   on this call.
   ... Now we're in the second round of gathering use cases.
   ... Firstly, we should share some information from the AC
   meeting that's relevant to this IG.

AC meeting update

   yosuke: We reported the outcome of the Munich TV workshop
   ... There was a good response from the audience. Some of them
   were interested in the actual service of using HTML5 and
   interactive TV programs.
   ... We also had an update from the W3C Team about testing.
   ... The W3C Team reported on the first round of the testing
   activity - Test the Web Forward is working well but funding is
   not.
   ... Now they'd like to continue the work via crowdsourcing.

   ddavis: Today I'm going to speak to W3C local offices about TV
   activities in W3C

Use-case gathering


      [7] http://www.w3.org/2011/webtv/track/actions/open?sort=due

   yosuke: Let's look through the issue tracker

   <scribe> ACTION: 194 to Clarify use case 1 description
   [recorded in

   <trackbot> Error finding '194'. You can review and register
   nicknames at <[9]http://www.w3.org/2011/webtv/track/users>.

      [9] http://www.w3.org/2011/webtv/track/users

   <yosuke> [10]http://www.w3.org/2011/webtv/wiki/New_Ideas

     [10] http://www.w3.org/2011/webtv/wiki/New_Ideas

   Use case 1 -


   ddavis: I added a bit more explanation to clarify the use case
   ... I added this text: " In other words, the user device could
   extract information in the broadcast feed to retrieve
   additional data related to the content being played."
   ... Maybe it's better to have two use cases

   yosuke: I think so

   PaulHiggs: It's possible that the web app could be running on
   the TV itself rather than a second device.
   ... I thought we would say the ability of a web app to detect
   the TV program that's being viewed.
   ... Then the second device could extract info from the
   broadcast feed.

   Glenn_d: What you could extract from the audio stream is a
   registry identifier or ad ID (inside the watermark).
   ... Once you have that, you can resolve it to get the show and
   episode identifier.
   ... It won't tell you where you are in the show, but what
   you're watching.
   PaulHiggs: The example is correct but the second sentence of
   the description doesn't seem to be correct.
   ... You could add something about listening to its audio
   information to get further information from the internet.

   Glenn_d: There are two ways of doing this - either there's a
   watermark you extract or you're going to take the audio and
   match it against an existing fingerprint.

   PaulHiggs: That's why it needs to be two use cases.
   ... Watermarking is non-audible tones that the tablet can hear.
   Fingerprinting is listening to a clip and comparing it to a
   known fingerprint database.
   Glenn_d: There's also a third case - the option that the TV
   device may have some fingerprinting tech built-in. It can
   notify the tablet of what's being watched.

   ?: Samsung TVs have a watermarking feature that can do this.

   PaulHiggs: Is that audio or something else?

   <Glenn_d> i'm not on the call

   <Glenn_d> np :)

   PaulHiggs: So in that third case you're saying the TV is doing
   the detection but not using audio.

   Glenn_d: Correct.

   ddavis: Is just two use cases enough?

   PaulHiggs: I had thought maybe we don't need the third one -
   it's mostly about getting the information on the companion
   device.
   Glenn_d: There is one difference - the industry uses a
   6-second clip for fingerprinting and can tell where in the show
   I'm watching.
   ... Whereas watermarking is just an identifier for the
   show.
   Clarke: Fingerprinting is taking information that's already
   there and characterizing it.
   ... Watermarking is adding inaudible info to the stream to
   identify the source or add e.g. copyright information.

   PaulHiggs: The watermarking technology could do more than just
   identify the show.

   yosuke: The benefit of fingerprinting is you don't need to add
   info to the stream beforehand.
   ... If you add more computing resources you can get more
   precise info of where you're watching.
   ... For the use case, when you're listening to the stream,
   without the original provider adding metadata or audio
   watermarking, you can use a fingerprinting service to get
   program info.
   ... For watermarking, the video creator can add anything they
   like within the stream and the receiver can decode that info.
   ... The service provider can deliver additional info to the
   receiver.
   Glenn_d: Some places have discussed a hybrid system - a
   watermark to obtain the show and a fingerprint to sync where
   you're watching.
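   The watermark/fingerprint distinction discussed above could be
   sketched roughly as follows. This is an illustrative sketch only:
   every function name, registry entry, and hash here is hypothetical
   and not from the call.

```javascript
// Hypothetical registry mapping embedded watermark IDs to programmes.
// A watermark identifies the show, but not the playback position.
const WATERMARK_REGISTRY = {
  "AD-12345": { show: "Example Show", episode: 3 },
};

// Hypothetical fingerprint database: the hash of a ~6-second clip maps
// to a show plus the offset (seconds) of that clip within the episode,
// so a fingerprint also tells you *where* in the show you are.
const FINGERPRINT_DB = {
  "a1b2c3": { show: "Example Show", episode: 3, offsetSeconds: 421 },
};

function resolveProgramme({ watermarkId, fingerprintHash }) {
  if (watermarkId && WATERMARK_REGISTRY[watermarkId]) {
    // Watermark path: show identity only, no position.
    return { ...WATERMARK_REGISTRY[watermarkId], offsetSeconds: null };
  }
  if (fingerprintHash && FINGERPRINT_DB[fingerprintHash]) {
    // Fingerprint path: show identity plus position in the stream.
    return FINGERPRINT_DB[fingerprintHash];
  }
  return null; // nothing recognised
}
```

   A hybrid system, as mentioned, would call both paths: the watermark
   to obtain the show, then a fingerprint to sync the position.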

   <scribe> ACTION: ddavis to split up use case 1 into
   watermarking and fingerprinting. [recorded in

   <trackbot> Created ACTION-200 - Split up use case 1 into
   watermarking and fingerprinting. [on Daniel Davis - due

   Action - ddavis to check current ability to change playback
   rate for HTML media element.

   <trackbot> Error finding '-'. You can review and register
   nicknames at <[13]http://www.w3.org/2011/webtv/track/users>.

     [13] http://www.w3.org/2011/webtv/track/users


     [14] http://daniemon.com/tech/html5/playbackRate/

   ddavis: This was already in the spec so I removed the use case.

   janina_: This is very useful, e.g. for immigrants, and can help
   a lot of people.
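   For reference, changing playback speed is already supported via
   HTMLMediaElement's playbackRate attribute, which is why the use
   case was removed. A minimal sketch; the helper name and the clamp
   range are assumptions for illustration, not from the spec:

```javascript
// In a browser this would apply to any <audio> or <video> element, e.g.
//   setPlaybackRate(document.querySelector("video"), 0.75);
// slows playback to 75% while keeping audio in sync.
function setPlaybackRate(media, rate) {
  // Clamp to a conservative range (an assumption - the HTML spec does
  // not mandate a range, though browsers may limit extreme values).
  const clamped = Math.min(4, Math.max(0.25, rate));
  media.playbackRate = clamped;
  return clamped;
}
```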

   Action 196 - Split synchronisation use case (#4)

   <trackbot> Error finding '196'. You can review and register
   nicknames at <[15]http://www.w3.org/2011/webtv/track/users>.

     [15] http://www.w3.org/2011/webtv/track/users



   ddavis: There are now two use cases - one where the media
   content is identical and played on multiple devices, the other
   where the media is different (but related) and synced across
   multiple devices.
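   The identical-media case could work roughly as in this sketch: a
   follower device compares its position against a master device's
   reported position and either nudges its playback rate or hard-seeks.
   The function name, the one-way-delay model, and the drift threshold
   are all assumptions for illustration.

```javascript
// Decide how a follower device should catch up with a master device.
// masterTime: the master's reported currentTime (seconds)
// delay: estimated one-way network delay (seconds) - by the time the
//        report arrives, the master has advanced by about this much.
function syncAction(followerTime, masterTime, delay, threshold = 0.5) {
  const target = masterTime + delay;
  const drift = target - followerTime;
  if (Math.abs(drift) < threshold) {
    // Small drift: smoother to nudge playbackRate than to seek.
    return { action: "nudge", drift };
  }
  // Large drift: a hard seek is the only practical option.
  return { action: "seek", target };
}
```

   The related-but-different-media case would use the same clock logic,
   just applied to a companion stream instead of an identical one.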

   yosuke: Any comments?

   Action 197 - Classify the accessibility requirements into
   general ones and specific ones.

   <trackbot> Error finding '197'. You can review and register
   nicknames at <[17]http://www.w3.org/2011/webtv/track/users>.

     [17] http://www.w3.org/2011/webtv/track/users

   yosuke: This is mine. We had a talk with the HTML accessibility
   sub group during the AC meeting.
   ... Mark, Janina, Daniel, Kaz and I joined.
   ... We exchanged some thoughts about how we can work together.
   ... There are two actions we can take.
   ... One is improving new standards or specs, e.g. TV Control
   API, that are related to this Interest Group, adding new
   accessibility features.
   ... We can contribute to this.
   ... The other approach is that there are lots of external orgs
   using HTML5 and related standards.
   ... We can improve awareness for people who use W3C standards
   of what is best for accessibility.
   ... We could create guidelines for other SDOs to use or
   reference.
   ... Accessibility is regulated, and raising awareness is
   important.
   ddavis: There are already guidelines being worked on so we
   would not rewrite them - just add to them or somehow make it
   easier for TV-related organisations and companies to find and
   digest the existing guidelines.

   MarkS: I can see the benefit of tailoring guidelines for a
   particular industry.
   ... We'd have to spend time learning the needs of that
   industry. We could collaborate on this.

   janina_: I'm all for it if the group thinks it's worth doing.

   ddavis: What is the goal for you?

   janina_: The first goal is that all the material we have is
   fully supported. This material, e.g. captions, which are
   enshrined in law, exists from before the web was born.
   ... Captions are already there and easy to use.
   ... We think the specs already support this.
   ... There may be cases where different devices display
   different things.
   ... Whatever the viewing setup, we want this work supported.
   ... The third goal is that people creating the user agents have
   guidance on how to use our resources.
   ... If we can have an example video that has every
   accessibility component that we suggest, that would be good.
   ... We've had some good response from education because they're
   under US law to provide accessible materials including videos.
   ... I think it's achievable.

   Clarke: I'm doing some work on the broadcast and cable side on
   descriptive video. I should sync up with your group at some
   point.
   <MarkS> Clarke, ping me at mark@w3.org and I will send you the

   yosuke: We could talk about the video in the IG, but we need
   significant contributions.
   ... For now I'm not sure how many members can contribute.

   MarkS: This part was a broader goal.

   yosuke: One idea is that currently hybrid TV is an important
   topic. There are lots of different delivery methods but one
   single viewing experience.
   ... That requires different standards from different SDOs.
   ... That's also a challenge because we need to compile several
   different things. There's no comprehensive study about how to
   maintain accessibility in such an environment.
   ... The media accessibility guidelines (to be published) could
   cover this.
   ... In this IG and other SDOs, people are creating new hybrid
   standards. If we can check what problems the hybrid systems
   have, that could help a lot of people.
   ... Any further comments?
   ... Let's firstly create some notes to clarify the goals of the
   collaboration between the TV IG and the media accessibility
   sub-group.
   ... We can refine this on the mailing list and then work
   together efficiently.

   Action 198 - Ask the Timed Text WG about 4K affecting
   captioning.

   <trackbot> Error finding '198'. You can review and register
   nicknames at <[18]http://www.w3.org/2011/webtv/track/users>.

     [18] http://www.w3.org/2011/webtv/track/users

   ddavis: Sorry I haven't done this yet.

   Action 199 - yosuke to create a questionnaire for 4K
   stakeholders about web standards issues.

   <trackbot> Error finding '199'. You can review and register
   nicknames at <[19]http://www.w3.org/2011/webtv/track/users>.

     [19] http://www.w3.org/2011/webtv/track/users

   yosuke: I wrote an email to Daniel suggesting we start within
   W3C members, and after that ask externally.
   ... 4K involves compression technology too, so the work is also
   done in MPEG and other SDOs.
   ... We can later ask external SDOs about their use cases.
   ... What groups should we ask about 4K?
   ... We already talked about asking the Timed Text Working
   Group. Should we ask e.g. HTML WG and its media task force?

   ddavis: I think so.

   yosuke: Maybe CSS WG - they're dealing with positioning and
   responsive media.
   ... Maybe SVG WG?
   ... And Media Accessibility Sub Group
   ... Any others?

   PaulHiggs: Is anyone looking at accessibility and subtitles on
   companion devices?
   ... I'm thinking of something like a teleprompter mode where a
   transcript scrolls on my tablet.

   Mark_Vickers: [inaudible]

   MarkS: I think you're saying current web tech already supports
   this functionality.
   ... We currently have the ability to sync video with a text
   transcript.
   yosuke: Our situation with synchronisation is we need to define
   better standards for this.
   ... For example, if the main video is delivered using
   broadcasting, there is almost no buffer.
   ... But we may need an additional synchronisation method. That's
   also the hybrid situation.
   ... The standard for this is not defined in W3C.
   ... It's important for accessibility but we don't have anything
   for this level of synchronisation.

   ddavis: So add it as a new use case?

   yosuke: Yes, but we already have lots of existing second screen
   use cases so maybe we can improve what we already have.
   ... Any other comments?

   ddavis: I think Janina has just sent a new use case.

   janina_: I think there are a couple of unique things.
   ... The use case is e.g. someone growing older who loses their
   hearing.
   ... Viewing movies/TV content, they can't follow the dialogue
   because of additional sound (gunshots, car noises, etc.)
   ... They could rely on captions but ideally they'd like to
   follow the audio soundtrack.
   ... In the UK there's a thing called clean audio.
   ... The notion is an alternative audio track which is primarily
   dialogue.
   ... So for example, the viewer has an app on their mobile
   device which tells them when a clean audio track is available
   for them to listen to separately.
   ... This audio app is important because a hearing therapist can
   tune the app to the frequencies that are best for that
   particular viewer.
   ... This is a very important strategy.
   ... It could also be possible to use the same app in the cinema
   watching movies.
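   A rough sketch of how a companion app might pick out such a track.
   The track objects mimic the shape of the HTML AudioTrack interface
   (kind and label attributes, with "alternative" being a valid kind),
   but the "clean" labelling convention and the function name are
   assumptions, not a standard.

```javascript
// Find an alternative audio track whose label marks it as clean audio
// (primarily dialogue), or null if none is available.
function findCleanAudioTrack(tracks) {
  return (
    tracks.find(
      (t) => t.kind === "alternative" && /clean/i.test(t.label)
    ) || null
  );
}
```

   An app could call this against a media element's audioTracks list
   and notify the viewer when a clean audio track is available to
   listen to separately, e.g. tuned by a hearing therapist.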

   yosuke: I think NHK also has similar research.
   ... They're keen to develop accessible technologies.

   janina_: I think we'd like to know about things like that. You
   could help us with footnotes and who in the industry is doing
   what.
   ... You could really help us with compiling the guidelines.

   yosuke: The industry would also like to improve society and get
   more recognition.

   janina_: I heard a US senator saying you can be in business and
   still do good - we should reward them.

   ddavis: Should we add that use case to our wiki page?

   PaulHiggs: To clarify, the clean audio is just the commentary.
   Is there some kind of mixing that could bring in some sound
   effects?
   janina_: Yes, I think there is.
   ... The additional audio should not get in the way of the

   PaulHiggs: And the app can adjust the frequency for the
   individual's hearing?
   janina_: Yes, exactly.

   yosuke: Any other topics to discuss?

   <scribe> ACTION: ddavis to add clean audio use case to wiki
   page [recorded in

   <trackbot> Created ACTION-201 - Add clean audio use case to
   wiki page [on Daniel Davis - due 2014-06-18].

   yosuke: Some work is going on in the mailing list. Let's
   clarify our schedule.
   ... We can think of milestones - when we finalise things and
   do gap analysis.
   ... On the mailing list, IG members can make comments.

   ddavis: Sounds good to me.

   yosuke: Thank you for joining us. Let's talk in two weeks.

   Meeting adjourned.

Summary of Action Items

   [NEW] ACTION: 194 to Clarify use case 1 description [recorded
   [NEW] ACTION: ddavis to add clean audio use case to wiki page
   [recorded in
   [NEW] ACTION: ddavis to split up use case 1 into watermarking
   and fingerprinting. [recorded in

   [End of minutes]
Received on Wednesday, 11 June 2014 14:28:03 UTC
