
Minutes from W3C M&E IG monthly call 4 Feb 2020: Bullet Chatting and Web of Things

From: Chris Needham <chris.needham@bbc.co.uk>
Date: Mon, 10 Feb 2020 14:04:31 +0000
To: "public-web-and-tv@w3.org" <public-web-and-tv@w3.org>
CC: "public-wot-ig@w3.org" <public-wot-ig@w3.org>, "public-wot-wg@w3.org" <public-wot-wg@w3.org>
Message-ID: <590FCC451AE69B47BFB798A89474BB366BA69C30@bgb01xud1006>
Dear all,

The minutes from the Media & Entertainment Interest Group call on Tuesday 4th February are now available [1], and copied below.

The slides from the Web of Things presentation are also available [2, 3].

Many thanks to Michael McCool and Michael Lagally for giving us a great introduction to the Web of Things and starting the conversation about media use cases.

M&E IG members are invited to join the Web of Things architecture calls, which happen each Thursday. See [4] for schedule and connection details.

The WoT Working Group has recently rechartered, so this is a good time to input requirements for the WoT architecture.

The next M&E IG call is planned for Tuesday 3rd March, topic TBD.

Kind regards,

Chris (Co-chair, W3C Media & Entertainment Interest Group)

[1] https://www.w3.org/2020/02/04-me-minutes.html
[2] https://github.com/w3c/wot/blob/master/PRESENTATIONS/2020-02-WoT-Status.pptx
[3] https://github.com/w3c/wot/blob/master/PRESENTATIONS/2020-02-WoT-Status.pdf
[4] https://www.w3.org/WoT/IG/wiki/WG_WoT_Architecture_WebConf

--

W3C
- DRAFT -
Media & Entertainment IG
4 Feb 2020

Attendees

Present
    Yajun_Chen, Kaz_Ashimura, Chris_Needham, Kazuhiro_Hoya, Akihiko_Koizuka, Michael_Li, Song_Xu, Takio_Yamaoka, Peipei_Guo, Keiichi_Suzuki, Xabier_Rodríguez_Calvar, Gary_Katsevman, Nigel_Megitt, Zhaoxin_Tan, Daihei_Shiohama, Francois_Daoust, Huaqi_Shan, Garrett_Singer, Larry_Zhao, Michael_Lagally, Pierre-Anthony_Lemieux, Rob_Smith, Andreas_Tai, Tatsuya_Igarashi, Will_Law, Fuqiao_Xue, Michael_McCool, Hiroki_Endo, John_Riviello, Tomoaki_Mizushima, Ege_Korkan

Regrets

Chair
    Chris_Needham, Pierre-Anthony_Lemieux, Tatsuya_Igarashi

Scribe
    kaz

Contents

Topics

Agenda
    Bullet Chatting
    WoT Joint discussion
    2nd-gen WoT use cases

Summary of Action Items
Summary of Resolutions
<scribe> scribenick: kaz

# Agenda

Chris: Two topics for today: continue bullet chatting TF discussion (30 minutes), and then joint meeting with the WoT WG (60 minutes).

# Bullet Chatting

Song: There are 3 discussion threads on the mailing list, I'll share the links.
.... We've discussed this internally in the CG, so we'll see if we can clarify.

<nigel> Song posted:

<nigel> https://lists.w3.org/Archives/Public/public-web-and-tv/2020Jan/0000.html Bullet chatting interchange format was: W3C M&E IG conference call: 7th January 2020

<nigel> https://lists.w3.org/Archives/Public/public-web-and-tv/2020Jan/0001.html Re: Bullet chatting interchange format was: W3C M&E IG conference call: 7th January 2020

<nigel> https://lists.w3.org/Archives/Public/public-web-and-tv/2020Jan/0007.html Bullet chatting questions (was RE: Bullet chatting interchange format was: W3C M&E IG conference call: 7th January 2020)

Chris: A clarification question: do all the use cases involve a web app running in a browser?
.... We want to understand whether a data interchange format like TTML/WebVTT is needed,
.... or possibly changes to CSS to enable the rendering.

https://lists.w3.org/Archives/Public/public-web-and-tv/2020Jan/0000.html

Pierre: Can we get a commitment to answer on the mailing list,
.... since there are people in various timezones?

Chris: Yeah, sometimes there are audio issues on the calls too.

Kaz: Song and Huaqi, do you think you can provide some responses on the mailing list?

Huaqi: The interactive wall is a web page displayed on the screen. In practice, CSS is used.
.... The bullet chatting use case covers video and non-video, and the interactive wall is non-video.

Chris: Are both browser and non-browser client devices in scope as well?

Huaqi: Yes, both.

Chris: Where is the interoperability requirement? Are there multiple different organizations involved that need to share data,
.... e.g., a service provider and an application provider? Understanding this will help us develop the right solutions.

Huaqi: Before standardisation, currently we use CSS in the implementation.
.... Standardization is intended to cover multiple applications, including video and non-video.
.... So whether the CSS implementation is easier than a new data format, we think, depends on the application.

Pierre: In the interactive wall use case, who would be the provider of information?

Song: We need to explain the detailed requirements for most of the interactive wall scenarios in the market, like Bilibili or Dwango.
.... What kind of scenarios is the interactive wall for?

Pierre: Specifically, what drives the video wall? Is it a cellphone that casts?
.... Does the interactive wall include its own renderer?
.... We need to understand where the information is coming from and where it's rendered, so we can best pick the standards needed.

Song: Based on my experience of interactive walls at venues, there's video streaming from an on-site team,
.... and the interactive discussion application which could be from Bilibili or WeChat or other social network application.
.... They will put the live bullet chatting text rendered on top of the video stream.

Pierre: Where does the mixing (compositing) of the video and the bullet chat happen? Is that in the video wall, or somewhere before it?

Song: Basically, it's real time interaction so we deliver the bullet chat separately from the video stream,
.... they come from two streams to make it interactive.

Pierre: To clarify, which device generates the bullet chat video stream?

Song: From the consumer side, the application has a bullet chatting area,
.... so on-site people can send bullet chat comments with the application,
.... and people outside the venue can send comments in the application too.
.... The back-end for the application (whether run by Bilibili or Tencent) puts this bullet chatting data on the screen separately, rather than integrating it into the video.

Pierre: The data between the client on each mobile phone and the backend is not a bullet chat stream, it's just the text coming from the users?
.... So the back-end takes all the information from the mobile phones and creates the bullet chat rendering, as a video, and the video is sent to the video wall?

Song: Yes, that's one possible mechanism.

Pierre: In that case there is no need for a bullet format, because the output of the back-end is a video stream that has the bullet chat composited,
.... and the input to the back-end is the user's typing.

Chris: My understanding was that these were delivered separately, so there's a client that does the compositing
.... and we're not rendering the bullet chat into the video, it's an overlay.

Pierre: The experience is being displayed on the video wall in the venue,
.... and the interface between the back-end server and the interactive wall is a video stream, not a bullet chat format?

Chris: So the question is: what is the device / app that renders to the interactive wall?

Pierre: Yes, if the video wall has a native bullet chat renderer, then I can see we need an interchange format to present the same interface to video walls from different vendors.

Kaz: So you're asking what's sent to the video wall, (1) separately or (2) integrated on the wall device.

Pierre: It would be good to understand this level of technical detail in the use case study.

Kaz: I suggest we, the Bullet Chat TF, should clarify when/who/what data is to be transferred within the use case description.

Song: Yes, we'll break down the steps in the interactive wall use case.
.... Let's continue with the next question.

https://lists.w3.org/Archives/Public/public-web-and-tv/2020Jan/0001.html Nigel's question

Nigel: Is there a requirement for a common interchange format for bullet chats that works interoperably across multiple applications?
.... If you have multiple client tools that allow people to enter bullet chats, and the data then needs to be sent to multiple pages running a standardised compositing system, especially when it's running in a web page rather than on a server, then I can understand the need for a common interchange format.
.... However, if these are all specific custom-written web applications that are presenting the bullet chat, based on the web page's own text entry input system, then there's no real need for interoperability, because each page does it its own way.
.... I'm trying to understand at what point we need an interoperable format.
.... What joining-up is there of multiple parties where a standardised format is needed?

Huaqi: Yes, the format needs to support multiple client platforms.
.... We also need to support animation, video on demand, live streams, embedded images, and so on.
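No interchange format has been standardised yet — that is the open question in this discussion — but as a purely illustrative strawman, a bullet chat message meeting the requirements mentioned (multiple platforms, animation, VOD and live, embedded images) might carry fields like these. All field names and values are invented:

```javascript
// Strawman only: sketches the kind of fields a bullet chat interchange
// message might carry. Nothing here is a standard; every name is invented.
const comment = {
  id: "c-0001",               // hypothetical message id
  text: "Nice goal!",
  time: 12.5,                 // media time in seconds (VOD); absent for live
  mode: "scroll",             // e.g. "scroll" | "top" | "bottom"
  style: { color: "#ffffff", fontSize: "1.5em" },
  image: null                 // optional embedded image URL
};

// Current implementations render with CSS; a client could translate a
// message into CSS along these lines:
function toCssAnimation(c, durationSec) {
  return `transform: translateX(-100vw); transition: transform ${durationSec}s linear;`;
}

console.log(toCssAnimation(comment, 8));
```

The point of such a format would be the interoperability Nigel asks about below: any compliant client could render messages from any contribution tool.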

Nigel: CSS can do that now, so I'm more focused on the interoperability requirement.
.... For example, do you want an open market place for bullet chat contribution tools, where people enter their bullet chat comments,
.... and have this appear on lots of different bullet chat pages provided by different organizations?

Song: We can bring the question to the CG and then bring the answer back later.

Chris: That's fine. Please reply to the original questions on the mailing list and we can continue discussion there.

Kaz: We can start with Chinese vendors, and then ask Japanese vendors for opinions on their ideal format.

Song: OK

# WoT Joint discussion

<McCool> wot presentation: https://github.com/w3c/wot/blob/master/PRESENTATIONS/2020-02-WoT-Status.pptx

Chris: I'm happy to welcome members of the Web of Things WG and IG to discuss media-related topics with us.

Michael_McCool: I put together a presentation to give an update on WoT and talk about use cases.

mm: starting with what WoT is like

Michael_McCool: [W3C Web of Things]
.... I'll introduce WoT, the use cases we've looked at so far, and current status.
.... WoT aims to deploy open web standards in IoT to support various use cases.
.... We're generally targeting interoperability. With IoT we want to combine devices and services from different manufacturers.
.... There's two groups: WoT IG, for initial thoughts and ideas, explored many building blocks.
.... Our first workshop was about 5 years ago, and a second workshop last summer in Munich.
.... IG Charter renewal for another two years in Oct 2019.
.... The standards work is done in the WG. This also just rechartered.
.... In our last two year period, we developed two Proposed Recommendations.
.... There's a general architecture document, to understand the basic concepts of IoT and use cases.
.... There's also a normative reference for Thing Description, which is a metadata format for describing services and how to talk to them.
.... Several notes are also published around scripting, protocol bindings, and security.
.... We rechartered recently with some new work items. A couple of things are relevant to Media & Entertainment: eventing and discovery.
.... [W3C Web of Things - Building Blocks]
.... The WoT Thing Description is metadata describing a device, such as ID and description, and its network interface.
.... Its network interface includes access points for different protocols and data payload descriptions.
.... It's JSON-LD, so we allow vocabulary extensions, we allow semantic annotation to be added to a Thing Description,
.... for example, to describe what a particular network interaction means in the real world.
.... This is the main entry point to determine how to talk to a Thing in a protocol-independent way.
.... The protocol dependency is handled by a protocol binding, which describes how the high level abstraction from a Thing Description maps down to a concrete protocol such as HTTP or COAP.
.... There's also a scripting API, which is currently informative, it gives a standardised way to write IoT orchestration scripts that can easily consume Thing Descriptions and use the abstractions to do things with IoT devices.
.... The binding templates is an informative document describing how to use WoT for particular protocols like MQTT or HTTP.
.... The REC track documents are the WoT Architecture and WoT Thing Description. The others are Notes, so not binding.
.... We're adding more normative documents in the new charter. We may also promote some of the Notes to normative status later.
.... [Published Proposed Recommendations]
.... The WoT Architecture describes general constraints about Things, e.g., Things in the WoT architecture must have a Thing Description describing their interface.
.... And they must be hypermedia, so protocol endpoints need to have URIs and they need to support media types, etc.
.... These are high-level constraints that many protocols can easily satisfy.
.... The WoT Architecture provides a high-level abstraction about how a device operates. We distinguish between the "what" and the "how".
.... The "what" is, for example, reading a property, or invoking an action.
.... The "how" is, sending an HTTP GET request to a URL to read a property and receive a JSON response.
.... The idea is that for a scripting API, you only care about the "what", and the rest is handled under the hood by the library given the information provided in the TD.
.... The TD is a data model which is serialized as JSON-LD 1.1. This is a newer version that more or less supports idiomatic JSON.
.... Originally, JSON-LD was meant as a serialisation format for RDF, specialised for that purpose.
.... The newer 1.1 version is designed for taking existing JSON and annotating the semantics. It's more familiar to people used to regular JSON.
.... The only real difference is the addition of the "@context" block at the top, which allows you to define the various terms and vocabulary used.
.... We have a base vocabulary used for TDs, and you can have extension vocabularies,
.... e.g., from an external vendor like iotschema.org, which defines concepts and data models around IoT devices that are shared among many manufacturers.
.... For example, you could annotate a Thing such as a light to have certain properties and capabilities in the real world.
.... The rest of the TD is mostly a list of properties, events, and actions.
.... That information has a data model and mechanisms to access the protocol.
.... This example is a simple data model of "integer" (for brightness). In general we use a large subset of JSON schema to describe the data model.
.... Data models can also apply to different data payloads, so if they can be converted to JSON, they're compatible with the data model, so we can use XML or CBOR as alternative payloads.
.... You can have unstructured payloads, e.g., a property that's an image, it's just a media type.
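As a sketch of the structure just described — a TD with an "@context", a data model drawn from a JSON Schema subset, and forms separating the "what" (read a property) from the "how" (an HTTP GET to a URL) — the following uses the real TD 1.0 context URI, but the device name, id, and endpoint are invented for illustration:

```javascript
// Illustrative only: a minimal Thing Description for a dimmable light.
// The id, title, and endpoint URL are invented; only the "@context" is real.
const td = {
  "@context": "https://www.w3.org/2019/wot/td/v1",
  "id": "urn:example:lamp-1",
  "title": "ExampleLamp",
  "securityDefinitions": { "basic_sc": { "scheme": "basic" } },
  "security": ["basic_sc"],
  "properties": {
    "brightness": {
      // the data model: a JSON Schema subset, as mentioned in the slides
      "type": "integer",
      "minimum": 0,
      "maximum": 100,
      "forms": [{
        "href": "https://lamp.example.com/brightness",   // invented endpoint
        "op": ["readproperty", "writeproperty"]
      }]
    }
  }
};

// The "what" is "read the brightness property"; the "how" is found by
// looking up the form that supports that operation.
function resolveForm(td, property, op) {
  return td.properties[property].forms.find(f => f.op.includes(op));
}

const form = resolveForm(td, "brightness", "readproperty");
console.log(form.href); // the concrete URL an HTTP binding would GET
```

A consumer library (such as one implementing the WoT Scripting API) would perform this lookup internally, so application code only deals with the abstract operation.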
.... [Published WG Notes]
.... We have discussed privacy, in the WoT Security and Privacy Guidelines Note.
.... We have defined a data format, but not how it's distributed exactly.
.... We'll next be looking at how to distribute TDs in a privacy-preserving way.
.... The scripting API is how to orchestrate IoT devices, we need to work on where this gets deployed.
.... We're working on WoT Binding Templates. We have a few basic protocols, and now we're looking at others like OCF, OPCUA, to extend the definition of where the TD can be applied.
.... [Status and Key Design Decisions]
.... We adopted JSON-LD 1.1, although it's not a REC yet. We've worked with the JSON-LD WG, and all the features we depend on will be in the spec.
.... We have some built in security data that we support. This is extensible, but for now focused on HTTPS and, to some extent, COAP.
.... We're looking at basic security mechanisms like OAuth2, it's extensible so you can add other security configurations as needed.
.... The security metadata is public metadata describing the protocol, so doesn't provide keys.
.... The TD is open to adding extensions to include, e.g., new security schemes, and to annotate their function on a device.
.... [Use Case Overview]
.... We've studied a lot of use cases in the architecture document.
.... This is the summary diagram that combines the use cases we've looked at, patterns of deployment.
.... We've looked at synchronising state between two services, in the cloud and at the edge.
.... I may have a device in the local network and I want to use it from a gateway and do IoT orchestration.
.... I may want to proxy a local device and make it available in the cloud.
.... I may want to use a device from a web app on a phone through a gateway.
.... There's various use cases where the TDs can be used to facilitate and abstract the interfaces to make them automatable and more convenient, with fewer special cases and more use of a general abstraction.
.... We looked at more specific concrete use cases as well, during plugfests and workshops.
.... [WoT Workshop: Munich 2019]
.... At the Munich workshop we looked at many use cases: industrial, automotive, smart home.
.... Mozilla and Siemens were there, and BMW showed a connected car interface.
.... [PlugFest]
.... We also have regular plugfests, typically three times a year. The last one was at TPAC.
.... I'll focus on a couple of applications related to media from this plugfest.
.... Media is a part of IoT systems, as an input/output device, or as part of a larger system,
.... or as a dashboard to control all your things.
.... [Plugfest Devices]
.... (diagram which includes devices, cloud and applications)
.... At the plugfest we had several different use cases plugged together.
.... We had local gateway things and things running in the cloud.
.... We want to support both local access to devices as well as cloud-mediated use cases, also remote access.
.... This shows a remote system being controlled over the network,
.... we had cloud systems doing digital twins, and local control using IoT orchestration.
.... [Scenario 1: Home/Building]
.... This looked at access to devices, lights, energy management, etc.
.... The Mozilla gateway is able to bridge devices and do role management.

<cpn> https://iot.mozilla.org/gateway/ Mozilla WebThings Gateway

Michael_McCool: [PlugFest photos...]
.... We had a noise source to drive a sound sensor. This photo shows Mozilla WebThings gateway and connected LED.
.... Mozilla have a project looking at a Web of Things gateway system that can provide local control of devices in the smart home without having to go through the cloud.
.... It could control a TV set, the TV provides a dashboard via the TV's web browser to control things in the home.
.... It could run on an ISP's gateway or a separate box.
.... [Scenario 2: Industrial]
.... Oracle has been involved in digital twins and factory management systems.
.... This could also be applied to media systems.

<cpn> https://docs.oracle.com/en/cloud/paas/iot-cloud/iotgs/oracle-iot-digital-twin-implementation.html Oracle Digital Twin Implementation

Michael_McCool: [Smart car charging]
.... [NHK Hybridcast Integration]
.... Japanese public broadcaster NHK have an example of IoT integration with Hybridcast for eventing. You can use a data channel in the broadcast to deliver data of various kinds.
.... You can use the data to control or influence devices in the smart home. For example, you're watching a TV show and you want to set the ambient lighting level or colour to complement the programme.
.... You can send data about the desired intensity and colour of the light over the broadcast and use a gateway to control the lights.
.... Related, you could do interactive TV applications with remote controls. The example shown at TPAC 2019 in Fukuoka was a broadcast data channel (visualised on the TV) carrying light colour and room temperature, controlling devices to meet those preferences.
.... In this context the data channel is another IoT device, orchestrated with IoT devices in the home.
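The flow described here — broadcast data treated as just another IoT input and mapped to a device control operation — can be sketched as below. The payload shape and property names are invented; NHK's actual data format is not specified in these minutes:

```javascript
// Sketch of the Hybridcast idea above: data from the broadcast data channel
// is mapped to a property write on a smart light. The payload shape
// { intensity, colour } and the target property are assumptions.
function lightingCommand(broadcastData) {
  return {
    op: "writeproperty",
    property: "light",
    value: {
      brightness: broadcastData.intensity,
      color: broadcastData.colour
    }
  };
}

// e.g. the broadcaster signals warm, dim lighting for a scene:
const cmd = lightingCommand({ intensity: 40, colour: "#ff8800" });
console.log(cmd.value.brightness); // 40
```

A gateway would then carry out this write against the light's Thing Description, as in the orchestration examples that follow.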
.... [Orchestration]
.... Example shows scripts using the WoT Scripting API. There's an open source implementation from the thingweb project

<cpn> http://www.thingweb.io/ Thingweb

Michael_McCool: Node-RED provides graphical orchestration.
.... We've also looked at the W3C multimodal interaction use cases, which include various ways to use media.

<cpn> https://www.w3.org/TR/mmi-use-cases/ Multimodal Interaction Use Cases

Michael_McCool: For example, offloading user interactions to nearby devices, such as a web app with large text displayed on a TV set,
.... after discovering the TV, that it has a text rendering capability.
.... Another use case is for blind users, who need text to speech; you may discover a speech interface and send described video information to it.
.... You can use IoT devices to notify the user of events, e.g., a doorbell could trigger something on the phone.
.... These can be done today using some cloud-mediated services.
.... But today, systems tend to be vertically integrated and we're trying to get to a more open ecosystem where you can plug and play services from different vendors.
.... [W3C WoT Resources]
.... Links to WoT wiki, WoT IG, WoT WG.
.... A new Charter for the WoT WG is available

https://www.w3.org/2020/01/wot-wg-charter.html new WoT WG Charter

Michael_McCool: [WG Charter Proposal: Work Items]
.... Our new charter includes some items that extend the current specs, including link relations, cleaning up some protocol issues,
.... but of most interest to media are interoperability profiles, discovery, and complex interactions such as eventing.
.... Interoperability profiles resolve a tension we have around wanting to support future protocols, as IoT is constantly evolving. We also want to know that things will work out of the box.
.... To do that, there needs to be a limited set of expected protocols by an implementation, but we want to keep it open-ended and extensible as new things evolve.
.... The way we're addressing this is via Interoperability Profiles, each of which has a finite set of requirements that is implementable.
.... If you satisfy a profile, you'll have plug and play interoperability. Mozilla and others are interested in this.
.... The other piece is observed and complex interactions. Eventing is a difficult problem, with HTTP there's multiple ways to do it, e.g., Web Sockets where you need to know about the protocol that runs on Web Sockets to be able to do eventing.
.... How to document this and what to standardise, to make eventing be generally useful?
.... There are standardised ways to do eventing now, such as long polling, server-sent-events. We're looking at additional mechanisms.
.... We also want to look at how you can use hypermedia controls to do complex action management. These don't necessarily need a standard, but we do need some conventions to explain how to deal with them.
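Of the standardised eventing mechanisms mentioned (long polling, server-sent events), SSE has a simple wire format that is easy to illustrate. The parser below is a minimal sketch, not a complete SSE implementation (it ignores `event:`, `id:`, and `retry:` fields), and the brightness payloads are invented:

```javascript
// Minimal, illustrative parser for the text/event-stream wire format:
// events are separated by blank lines; payload lines start with "data:".
function parseSse(streamText) {
  return streamText
    .split("\n\n")                          // blank line ends an event
    .filter(block => block.trim() !== "")
    .map(block => block
      .split("\n")
      .filter(line => line.startsWith("data:"))
      .map(line => line.slice(5).trim())
      .join("\n"));
}

// Two hypothetical property-change events from a device:
const events = parseSse(
  'data: {"brightness": 80}\n\ndata: {"brightness": 20}\n\n'
);
console.log(events.length); // 2
```

In a browser, the built-in EventSource API does this parsing; the sketch just shows why SSE is attractive as an eventing convention — the framing is trivial over plain HTTP.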
.... Discovery is a new topic, how to get access to the Thing Description? How do I find the services that are available?
.... How do I do that in a privacy-preserving manner?
.... We also want to support local and global contexts for discovery, e.g., on a LAN behind NAT, or in a public space such as a mall, or cloud services, smart city.
.... We need to be very careful about making this process preserve privacy, so we can't just broadcast TDs; we first need to check the user has rights to access that data.
.... This needs a multi-stage process, to get initial information on first contact, then detailed metadata, and we need access control between these two steps.
.... We started work on initial drafts, still early days.
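The two-stage flow described — minimal information on first contact, detailed metadata only after access control — can be sketched as follows. The directory structure, token check, and all names are invented stubs standing in for a real authentication mechanism:

```javascript
// Illustrative two-stage discovery. A real deployment would use proper
// authentication; the token list here is a stand-in for access control.
const directory = {
  "urn:example:lamp-1": {
    td: { title: "ExampleLamp" },        // full TD, held back until stage 2
    allowedTokens: ["secret"]
  }
};

// Stage 1: first contact returns deliberately minimal metadata —
// no TD, no capabilities leaked.
function firstContact(id) {
  return directory[id] ? { id, directoryHint: "/things/" + id } : null;
}

// Stage 2: the detailed TD is released only after the requester is
// authenticated and authorised.
function fetchTd(id, token) {
  const entry = directory[id];
  if (!entry || !entry.allowedTokens.includes(token)) return null;
  return entry.td;
}

console.log(fetchTd("urn:example:lamp-1", "secret").title); // ExampleLamp
console.log(fetchTd("urn:example:lamp-1", "wrong"));        // null
```

The design point is that the stage boundary is where privacy is enforced: broadcasting stage-2 data would reveal device inventories to anyone listening.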
.... Another thing related to privacy is identifiers. The TD has an ID field that's a controversial issue as it relates to tracking. If you have the device IDs and you can associate those with people, then you can track people.
.... We're talking to the DID Working Group about how to manage decentralized identifiers in a privacy-preserving way.
.... Need to work on how this works. Who has access to identifiers, how are they updated, temporary IDs, etc?
.... That's all I have. Any questions?

Kaz: Maybe NHK guys want to provide some comments on their use case?

Endo: Michael provided enough information already.

Michael_McCool: OK. We used the multimodal interactions as a starting point, but that doesn't cover everything.
.... There are many use cases for media. There's a CG working on supporting local connections via HTTPS, handling certificates etc. in a peer-to-peer context.
.... This is important for certain cases like using a mobile phone to control a local device over HTTPS.

<cpn> https://www.w3.org/community/httpslocal/ HTTPS in Local Network Community Group

Chris: Has there been any work done so far on IoT devices that stream audio/video, e.g., sensors and/or cameras?

Michael_McCool: We have done cameras, but only single image capture so far.
.... We don't directly support streaming except to point to the entry point for a streaming interface.
.... The idea is we'd then hand off to something that supports streaming.
.... It's more about manageability of devices, selecting a channel or volume, etc.

Chris: So I could use a TD to advertise an HLS stream at a given location?

Michael_McCool: Yes, then you might have actions to control the stream,
.... but we'd need to do more work building proofs of concept, it's technically feasible though.
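The hand-off pattern McCool describes — a TD advertising a stream's entry point plus actions to control it — might look like this. The WoT specs define no media vocabulary, so the "stream" property, the action names, and all URLs are invented for the sketch:

```javascript
// Illustrative only: a TD advertising an HLS stream location and offering
// playback-control actions. Property/action names and URLs are invented;
// only the "@context" is the real TD 1.0 context.
const mediaTd = {
  "@context": "https://www.w3.org/2019/wot/td/v1",
  "title": "ExamplePlayer",
  "properties": {
    "stream": {
      "type": "string",       // reading it might return an HLS manifest URL,
      "readOnly": true,       // handing off to a player that does streaming
      "forms": [{ "href": "https://player.example.com/stream",
                  "op": ["readproperty"] }]
    }
  },
  "actions": {
    "play":  { "forms": [{ "href": "https://player.example.com/play" }] },
    "pause": { "forms": [{ "href": "https://player.example.com/pause" }] }
  }
};

console.log(Object.keys(mediaTd.actions)); // ["play", "pause"]
```

This matches the division of labour stated above: WoT handles discovery and manageability, while the streaming itself is delegated to existing media protocols.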

Chris: Something we've been working on at the BBC is protocols for use in a studio environment for media production.
.... We're moving away from using fixed infrastructure to make it more software configurable, endpoints for capture devices such as cameras, other things that can consume streams like a vision mixer.
.... There are industry groups looking at discovery mechanisms and protocols. This generally runs in a managed network environment; there are issues around scalability when there are lots of devices.

Michael_McCool: We've looked at both managed environments and open environments.
.... For managed environments we know in advance where everything is, so there can be a database with all the TDs. That's our current supported use case.
.... The more open environment is where we want to get to.

Kaz: The plugfests are focused on managing the TDs, device management and capability control.
.... A video stream is not really handled by the WoT mechanism itself. We should think about how to integrate video streaming from the TV/video side.

Michael_McCool: Yes, we want to use existing standards and provide access to them.
.... We'd need to explore those use cases and make some PoCs to identify issues.

Michael_Lagally: We have been discussing two different sets of use cases:
.... Audio/video streaming, where there'd need to be content negotiation to figure out which signalling mechanisms, media streams, formats, etc., to use.
.... The other one is interface to devices for querying capabilities and controlling a rendering device.
.... Is there any work on discovery, control and rendering of playback streaming devices?

Chris: Yes, from a W3C point of view, maybe most interesting is Open Screen Protocol,
.... being developed by the Second Screen WG.

<cpn> https://w3c.github.io/openscreenprotocol/ Open Screen Protocol

Chris: It supports use cases like Chromecast. From a browser you can discover compatible screens available, then exchange capability information,
.... then hand over playback of the video stream to the other device.

<tidoust> I note that the Open Screen Protocol also includes provisions for the Remote Playback API that includes control actions (play/pause/seek).

Chris: This protocol design faced some of the same issues of privacy and security.
.... It's designed for a local network environment, users have an expectation of privacy.
.... So there's a minimal level of information shared before the security layer is established.
.... This may be useful for the WoT group to look at.
.... The other thing, not discussed in MEIG recently, is UPnP/DLNA, which is well established.
.... This allows you to control playback of media in your media collection. There's a whole protocol suite to support these use cases.
.... Have you looked at this?

Michael_McCool: We have a liaison with OCF

<cpn> https://openconnectivity.org/ Open Connectivity Foundation

Michael_McCool: We've been working with people at IETF, looking at DHCP and DNS extensions.
.... DNS is interesting, as it supports non-local search for things in a certain location, e.g., in a city. But that's more an IETF liaison thing.
.... One of the challenges with discovery is that there's multiple ways to do it.
.... Our approach is re-using existing mechanisms as the first-contact mechanism, to discover and authenticate to a directory service.
.... You can only get access to the metadata after authenticating to the service.
.... We also want to ensure minimal information is distributed in the first contact protocol.

Michael_Lagally: Coming back to other use cases, UPnP and second screen are things where the user has complete control.
.... In DVB there's been a lot of work around second screen and time synchronisation of different media streams across devices.
.... These use cases seem to be consumer-centric, so everything happens under control of the user.
.... Are there also broadcaster use cases, something for streaming providers?

Chris: There are the production use cases I mentioned, within a studio environment.
.... There's the Advanced Media Workflow Association (AMWA) group developing the Network Media Open Specification (NMOS).
.... These are very much professional media use cases, discovery and control of cameras and production equipment.

<cpn> https://www.amwa.tv/nmos AMWA NMOS

Chris: So there's two broad categories: media production, and between content provider and consumer.
.... In the consumer case, we have experimented with use cases such as the ambient lighting for particular content, I'm not sure if there are deployments.

Michael_McCool: There's also other data, such as information about the programme that you may want to use in other ways.
.... Such as who is on the screen, described video, subtitle information can be used.
.... There's also a question of reverse data, sending information for interactive TV, e.g., voting, polls, etc.
.... One question too is: systems like an ISP box that runs apps like Netflix, these can save video for playback, etc. How do these kinds of services fit in?

Michael_Lagally: I want to find out whether there is some interoperability issue where it could help to have a common way of describing interfaces, behaviour, and device categories,
.... and integrating these things for wider benefit.
.... Is there a use case where applying the Thing Description across different stakeholders would add value, for either content provider or content consumer?

# 2nd-gen WoT use cases

Michael_Lagally: I can talk about the latest use cases we've been working on in the architecture group.
.... We've been looking at various application domains: consumer, industrial, transportation, and deriving common patterns from them.
.... There are device controllers, thing-to-thing communication, etc. Most of these aren't specifically excluding media streams, but none of them put media streams at the core of the use case.

<mlagally> https://github.com/w3c/wot-architecture/tree/master/USE-CASES

Michael_Lagally: This document was started several weeks ago, we're updating the use cases.
.... We're working at the moment on a lifecycle diagram for WoT systems.
.... We're adding digital twin and multimodal use cases.
.... There's an initial set of use cases that are in scope.

Michael_McCool: We've just started our new charter, so we're in start up mode, we're collecting use cases and discussing which are in scope.
.... We're trying to be more precise about use cases.

https://www.w3.org/2020/01/wot-wg-charter.html new WoT WG Charter

Michael_Lagally: It would make sense to get additional input from you.
.... We have the WoT Architecture calls every Thursday, at 8am CET and 4pm CET.
.... We're holding two separate calls so that people from various timezones can join the calls.
.... It would be great if you could also participate, so we address the right things.

Chris: This has been a great introduction, thank you for presenting to us.
.... I want to encourage our MEIG members to think about the use cases we would want to bring to the WoT WG.
.... The TD can bring interoperability with different protocols and capabilities.
.... It's possibly related to our media production topic.
.... There may be other industry groups to reach out to.

Kaz: Yeah, as I'm team contact for MEIG and WoT WG I'd suggest we continue this kind of collaborative discussion.

Chris: I agree, the MEIG includes several members who are consumer electronics manufacturers, not just media devices but connected home also. I wonder what their point of view is.
.... Kaz, we could follow up with some of our members.

Michael_McCool: Possibly we could have more joint calls in future,
.... and people could join the WoT Architecture calls directly,
.... and report back when there's more details to discuss.

Chris: That sounds like a good plan.
.... I'll write something to share with the MEIG members, to see who would be interested to participate.

Michael_McCool: Lagally, can you show the WoT Architecture wiki, which includes the teleconference information?

Michael_Lagally: OK

<mlagally> https://www.w3.org/WoT/IG/wiki/WG_WoT_Architecture_WebConf

Michael_Lagally: WoT Architecture TF wiki above

Chris: Thank you all for joining today, and I hope our members will join, and look forward to hearing progress.

[adjourned]

Summary of Action Items
Summary of Resolutions
[End of minutes]
Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
$Date: 2020/02/10 01:57:49 $
Received on Monday, 10 February 2020 14:05:55 UTC
