RE: Proposal of new requirements for the Web-based signage toward W3C standard

Dear Web Signage BG,

Following up on discussions in the MMI WG, I’ve put together this example of how the Multimodal Architecture might be applied to a Web Signage use case. I hope it is useful, and we would be happy to answer any questions or discuss how the MMI work might be relevant to your use cases.

Best regards,

Debbie Dahl

 

The MMI Architecture Recommendation [1] and the Discovery and Registration Working Draft [2] seem very relevant to Web Signage use cases. This message briefly describes the MMI Architecture, illustrates how it would work for a specific Web Signage use case, and then describes the goals of the work on Discovery and Registration.

The MMI Architecture defines a set of high-level life-cycle events for communication among the components of a multimodal system. The components include a controller (an Interaction Manager, or IM) and various components that do things like present information to users and analyze user inputs (these are called Modality Components, or MCs). The life-cycle events (with names like "StartRequest", "CancelRequest", "PauseRequest" and so on) don't require a particular form of transport, although the spec uses some HTTP examples for illustration; Web Sockets could also be used.
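To make the events concrete, here is a rough sketch in Python of how an IM might serialize a "StartRequest" as XML. The mmi/StartRequest/Data element names and the Context/RequestID/Source/Target attributes come from the MMI Architecture spec; the identifier values, the URIs, and the helper function itself are invented for this illustration:

```python
# Illustrative sketch: serializing an MMI "StartRequest" life-cycle event.
# Element/attribute names follow the MMI Architecture spec; the identifiers,
# URIs, and payload below are made up for this example.
import xml.etree.ElementTree as ET

MMI_NS = "http://www.w3.org/2008/04/mmi-arch"

def start_request(context, request_id, source, target, data_xml=None):
    ET.register_namespace("mmi", MMI_NS)
    root = ET.Element(f"{{{MMI_NS}}}mmi", {"version": "1.0"})
    req = ET.SubElement(root, f"{{{MMI_NS}}}StartRequest", {
        "Context": context,       # identifies the ongoing interaction
        "RequestID": request_id,  # pairs the request with its response
        "Source": source,         # sender (here, the IM)
        "Target": target,         # receiver (here, an MC)
    })
    if data_xml is not None:
        data = ET.SubElement(req, f"{{{MMI_NS}}}Data")
        data.append(ET.fromstring(data_xml))  # application-specific payload
    return ET.tostring(root, encoding="unicode")

# The resulting string could be carried over HTTP, a Web Socket, etc.
msg = start_request("ctx-1", "req-1",
                    "http://im.example.com",      # assumed IM address
                    "http://camera.example.com",  # assumed camera MC address
                    "<takePicture/>")             # hypothetical instruction
```

The same wrapper-plus-Data shape applies to the other life-cycle events, which is what lets the IM talk to very different MCs through one small vocabulary.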

In a possible Web Signage use case, the user approaches a display in a mall that also includes a camera and a motion detector. The camera identifies the user as an adult female, and based on this identification the display changes to show advertising that the user might be interested in. Alternatively, the camera identifies the user as a young child without a nearby adult, and notifies the mall security staff about a possible lost child.

If this hypothetical system were based on the MMI Architecture, the following sequence of events would occur (somewhat simplified for clarity).

1. The motion detector (an MC) detects that someone is in range of the display.

2. The motion detector MC sends a "NewContextRequest" message to the IM, which could be implemented in SCXML [3] and which, in this example, happens to be located on a server. This event notifies the IM that a new user has arrived and that it should start a new interaction.

3. The IM sends a "StartRequest" event to the camera (another MC) that includes application-specific instructions in its "Data" field to take a picture of the user.

4. The picture is taken and sent back to the IM as an EMMA [4] message contained in a DoneNotification event. The EMMA message is time-stamped so that the IM knows when the picture was taken. This is useful, for example, if the application wants to take time information into account in deciding how to present the display to the user (if it's close to dinnertime, offer the user a restaurant coupon).

5. The IM sends the image to a web-based image analysis service (this service is also an MC, although it doesn't directly interact with the user).

6. The image analysis MC identifies the age and gender of the user and sends this information back to the IM in another DoneNotification event containing another EMMA message. The new EMMA message contains the interpretation of the image as showing one person, classified by age and gender.

7. The IM application logic determines what to display or what to do, based on the age and gender of the user.

8. If the user is an adult female, the IM sends a StartRequest to the display containing HTML to display on the digital sign.

9. If the user is a young child without an adult nearby, the IM sends an alert to mall security with the picture of the child and the location of the display.
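The branching in steps 7–9 is ordinary application logic inside the IM (which in practice might be an SCXML state machine). As a toy sketch in Python, where the classification labels, the action tuples, and the advert HTML are all illustrative assumptions rather than anything defined by the MMI Architecture:

```python
# Toy sketch of the IM's decision logic in steps 7-9. The labels
# ("adult", "child", ...), the action tuples, and the advert HTML are
# invented for illustration, not part of the MMI Architecture.

ADVERT_HTML = "<html><body>Dinner special: restaurant coupon!</body></html>"

def decide(age_group, gender, adult_nearby):
    """Map one classified person to the IM's next action."""
    if age_group == "adult" and gender == "female":
        # Step 8: StartRequest to the display MC with HTML to render.
        return ("StartRequest", "display", ADVERT_HTML)
    if age_group == "child" and not adult_nearby:
        # Step 9: alert mall security (picture and location would go here).
        return ("Alert", "security", "possible lost child")
    # Otherwise leave the current display unchanged.
    return ("NoOp", None, None)

action, target, payload = decide("adult", "female", True)
```

The point is simply that the MCs deliver standardized events, while everything application-specific lives in one place in the IM.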

The above use case doesn't make use of the user's mobile device, but many Web Signage use cases do. In those cases, if the user wishes to interact with the display using their own device, the system consisting of the device and the display has to be configured dynamically, in real time, as the user comes within range of the display. The new Discovery and Registration work is designed to address the configuration of such dynamic systems, where it is necessary to find new system components, add them to the system, identify their capabilities, monitor their state during the interaction, and remove them when they are no longer needed or become unavailable. The Discovery and Registration work includes Use Cases and Requirements [5] and a Working Draft of a specification [2].
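Purely to illustrate the kind of bookkeeping involved (this is not the data model or protocol defined in the Working Draft), a registry of Modality Components might track capabilities and state roughly like this:

```python
# Toy registry of Modality Components: register, query by capability,
# track state, and remove. The component ids, capability names, and
# state strings are invented for illustration; the Discovery &
# Registration draft defines its own events and data model.

class Registry:
    def __init__(self):
        self.components = {}  # id -> {"capabilities": set, "state": str}

    def register(self, comp_id, capabilities):
        self.components[comp_id] = {"capabilities": set(capabilities),
                                    "state": "available"}

    def update_state(self, comp_id, state):
        self.components[comp_id]["state"] = state

    def find(self, capability):
        """Available components advertising a given capability."""
        return [cid for cid, c in self.components.items()
                if capability in c["capabilities"]
                and c["state"] == "available"]

    def unregister(self, comp_id):
        self.components.pop(comp_id, None)

reg = Registry()
reg.register("user-phone", ["display", "touch", "audio"])
reg.register("mall-sign", ["display"])
reg.update_state("user-phone", "unavailable")  # user walks out of range
# reg.find("display") now returns only the sign.
```

A real system would drive these transitions from discovery events on the network rather than direct method calls, but the lifecycle (add, describe, monitor, remove) is the same.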

 

1. Barnett J, Bodell M, Dahl DA, Kliche I, Larson J, Porter B, Raggett D, Raman TV, Rodriguez BH, Selvaraj M, Tumuluri R, Wahbe A, Wiechno P, Yudkowsky M (2012) Multimodal Architecture and Interfaces. World Wide Web Consortium. http://www.w3.org/TR/mmi-arch/. Accessed November 20 2012

2. Rodríguez BH, Barnett J, Dahl D, Tumuluri R, Kharidi N, Ashimura K (2015) Discovery and Registration of Multimodal Modality Components: State Handling. World Wide Web Consortium. https://www.w3.org/TR/mmi-mc-discovery/

3. Barnett J, Akolkar R, Auburn RJ, Bodell M, Burnett DC, Carter J, McGlashan S, Lager T, Helbing M, Hosn R, Raman TV, Reifenrath K, Rosenthal N (2015) State Chart XML (SCXML): State Machine Notation for Control Abstraction. World Wide Web Consortium. http://www.w3.org/TR/scxml/. Accessed February 20 2016

4. Johnston M, Dahl DA, Denny T, Kharidi N (2015) EMMA: Extensible MultiModal Annotation markup language Version 2.0. World Wide Web Consortium. http://www.w3.org/TR/emma20/. Accessed December 16 2015

5. Rodriguez BH, Wiechno P, Dahl DA, Ashimura K, Tumuluri R (2012) Registration & Discovery of Multimodal Modality Components in Multimodal Systems: Use Cases and Requirements. World Wide Web Consortium. http://www.w3.org/TR/mmi-discovery/. Accessed November 26 2012

 

 

From: Kazuyuki Ashimura [mailto:ashimura@w3.org] 
Sent: Thursday, February 11, 2016 4:50 AM
To: Bassbouss, Louay
Cc: Tanaka(田中清) Kiyoshi; Futomi Hatano; public-websignage@w3.org; Fuhrhop, Christian; Steglich, Stephan; Deborah Dahl; B. Helena RODRIGUEZ; Raj (Openstream)
Subject: Re: Proposal of new requirements for the Web-based signage toward W3C standard

 

On Thu, Feb 11, 2016 at 6:35 PM, Bassbouss, Louay <louay.bassbouss@fokus.fraunhofer.de> wrote:

Hi Kaz, 

 

Please find my comments inline.

 

Thanks a lot, Louay :) !

Kazuyuki

 

 

Thx,

Louay

 

On 09 Feb 2016, at 08:22, Kazuyuki Ashimura <ashimura@w3.org> wrote:

 

Hi Louay and Kiyoshi,
CCing Debbie, Helena and Raj from the MMI WG

Thanks a lot for updating the draft Charter, Kiyoshi!
And thanks a lot for your thoughtful comments, Louay!

I have some comments to you as follows :)

----------------------
1. Comments for Louay
----------------------

Louay, it seems you're interested in:
- how to handle the lifecycle of the user agent (UA)
- how to manage the state transition of the UA
- common data format/vocabulary for data exchange

Exactly, this is why I wanted to mention it :)




So I wanted to mention that there is some work by the MMI WG [1] related to
the above topics including:
- MMI Architecture, esp. its application lifecycle events [2]
- EMMA data format [3] and its JSON serialisation

I know the MMI group and, as you mentioned, it is good to address their work in this group, since they already have solutions for some particular aspects relevant to the WebSignage group. 






I'm CCing this message to Debbie Dahl, the MMI WG Chair, and Helena and
Raj, the two most active MMI participants, so that they can provide some more
details on those points.

[1] https://www.w3.org/2013/10/mmi-charter.html
[2] https://www.w3.org/TR/mmi-arch/#LifeCycleEvents
[3] https://www.w3.org/TR/emma20/

------------------------
2. Comments for Kiyoshi
------------------------

2.1 Section 3.1
----------------

Given the possible relationship with the MMI WG as mentioned above,
I'd suggest we add the MMI WG to the section "3.1 Liaisons" of the draft
Charter.

2.2 Section 3.2
----------------

Regarding the section "3.2 External Groups", I have the following two
comments:

1. "ITU-T Q14/16" should be "ITU-T SG16 Q14/16".

2. There are only ITU-T and DSC listed as related external groups, but
    maybe there should be some more related SDOs (esp. international
    ones), shouldn't there?

Thanks,

Kazuyuki

 

 

On Wed, Feb 3, 2016 at 8:12 PM, Bassbouss, Louay <louay.bassbouss@fokus.fraunhofer.de> wrote:

Hi Kiyoshi,

> On 03 Feb 2016, at 11:06, Kiyoshi Tanaka (田中 清) <tanaka.kiyoshi@lab.ntt.co.jp> wrote:
>
> Dear Louay,
>
>> I meant the lifecycle of the Terminal User Agent (or, more precisely, the privileged service running in the background). For example, what are the possible states of the User Agent, like “Turned Off”, “Rebooting”, “Connected”, “Disconnected”, “Reconnecting”, etc.? What happens when a User Agent is rebooted: does it show the last page before rebooting or a default page, etc.? I am not sure if this is important for the draft. What do you think?
>
> I understand. Your "lifecycle" seems to be a kind of contextual information. Whether the UA needs such complicated operation might depend on the service that the signage provides, and such a discussion would be done in a use-case study.
> So, regarding the draft charter, I think it is already covered by the phrase "use-case study".
>
>> Yes I think it better to address this point in the charter and the decision about potential solutions will be done later in the WG.
>
> Thank you for your clarification!
>
>> Just another question: The group is also addressing interactive Digital Signage (like those in shopping malls with touch screen)?
>
> Do you have some special issue for interactive service of signage?
Let's take prompting dialogs as an example again: in the case of interactive services, this may be allowed.
>
> Though I'm not sure whether there was clear discussion in BG before,
> I think the WG does not exclude the interactive issue.
> However, it is difficult for a new group to study many things, so the WG charter should include only basic deliverables at this moment, I think.
>
> Best regards,
> Kiyoshi
> ---
> Kiyoshi Tanaka, Ph.D.
>  NTT Service Evolution Laboratories
>  mailto:tanaka.kiyoshi@lab.ntt.co.jp

Thx
Louay

>
>
> On 2016/02/03 17:52, Bassbouss, Louay wrote:
>> Dear Kiyoshi, Futomi,
>>
>> Please find my comments inline.
>>
>> Thx,
>> Louay
>>> On 03 Feb 2016, at 04:00, Kiyoshi Tanaka (田中 清) <tanaka.kiyoshi@lab.ntt.co.jp> wrote:
>>>
>>> Dear Louay,
>>>
>>> Thank you for your explanation.
>>>
>>>> Exactly this is what I meant. It is good to have a common data format
>>>> and vocabularies for interaction between Terminal and Management
>>>> backend. JSON (or JSON-RPC) is a good candidate for data format.
>>>> Vocabulary needs to be defined on top (Name of events, actions,
>>>> properties, values, etc.). I think also lifecycle is important to discuss.
>>>
>>> As I said, this would be a good discussion point.
>>> I'll update the draft charter including the data format issue.
>>>
>>> For the clarification, could you please give us the explanation of "lifecycle"? (as Futomi requested)
>> I meant the lifecycle of the Terminal User Agent (or, more precisely, the privileged service running in the background). For example, what are the possible states of the User Agent, like “Turned Off”, “Rebooting”, “Connected”, “Disconnected”, “Reconnecting”, etc.? What happens when a User Agent is rebooted: does it show the last page before rebooting or a default page, etc.? I am not sure if this is important for the draft. What do you think?
>>>
>>>>>>> Remote Prompting
>>> [...]
>>>> From my point of view, I don’t think that these prompting functions are
>>>> needed. The idea of Christian is to address these functions in the spec
>>>> to make clear what happens when a page uses one of these functions. For
>>>> example the spec could state that calling these functions will silently
>>>> fail without showing any dialog. Even if prompting the dialog on other
>>>> systems (like the Operator backend) is possible, it will technically raise some
>>>> issues because all prompting functions are synchronous.
>>>
>>> OK. I think this function is still interesting but not mature enough.
>>> So, I want to leave it for further discussion. Is it alright? > all
>> Yes I think it better to address this point in the charter and the decision about potential solutions will be done later in the WG.
>>>
>>> If you have any additional/new comment, please tell me!
>> Just another question: The group is also addressing interactive Digital Signage (like those in shopping malls with touch screen)?
>>>
>>> Best regards,
>>> Kiyoshi
>>> ---
>>> Kiyoshi Tanaka, Ph.D.
>>>  NTT Service Evolution Laboratories
>>>  mailto:tanaka.kiyoshi@lab.ntt.co.jp
>>>
>>>
>>> On 2016/02/02 20:08, Bassbouss, Louay wrote:
>>>> Hi Futomi, Kiyoshi,
>>>>
>>>> Please find my comments inline.
>>>>
>>>> Thx,
>>>> Louay
>>>>> On 02 Feb 2016, at 07:37, Futomi Hatano <futomi.hatano@newphoria.co.jp> wrote:
>>>>>
>>>>> On Tue, 2 Feb 2016 14:48:19 +0900
>>>>> Kiyoshi Tanaka (田中 清) <tanaka.kiyoshi@lab.ntt.co.jp> wrote:
>>>>>
>>>>>> Dear Louay, Christian,
>>>>>>
>>>>>> Thank you for your check and comment!
>>>>>> I'll check your revisions and reflect them in the draft charter.
>>>>>>
>>>>>>> Another question from our side which is not mentioned in the document is
>>>>>>> about protocols: Do you think we need a signalling or control protocol
>>>>>>> (can be on top of existing communication protocols like WS or WebRTC) to
>>>>>>> control the User Agent on the digital signage from a Management Backend.
>>>>>>
>>>>>> I guess there are some implementations using WebSocket.
>>>>>> However, the protocol on the WS is assumed different on each system.
>>>>>> So, I think it would be a good point to be discussed in the WG.
>>>>>> # How do you think? > BG Members
>>>>>
>>>>> +1
>>>>> For now, JS Players are very much tied to CMS.
>>>>> If the protocol (to be precise, it's data format) is standardized,
>>>>> JS Players can be developed independently.
>>>>>
>>>>> JSON-RPC can be a major candidate, though I don't support it strongly.
>>>>> We need only define vocabulary.
>>>> Exactly this is what I meant. It is good to have a common data format
>>>> and vocabularies for interaction between Terminal and Management
>>>> backend. JSON (or JSON-RPC) is a good candidate for data format.
>>>> Vocabulary needs to be defined on top (Name of events, actions,
>>>> properties, values, etc.). I think also lifecycle is important to discuss.
>>>>
>>>>> Futomi
>>>>>
>>>>>
>>>>>> ---
>>>>>> Followings are my replies for your comments picked up from the draft.
>>>>>>
>>>>>> I've tried to make the roles clearer and more consistent throughout
>>>>>>> the document. Originally there were users, owners and operators, with
>>>>>>> users being application users as well as users near the controlled
>>>>>>> device. So I reduced that to operators and viewers for simplicity and
>>>>>>> clarity.
>>>>>>
>>>>>> Thank you for your clarification! It sounds good by simplifying.
>>>>>> Moreover, it might be better to change "viewer" to "audience".
>>>>>>
>>>>>>> Shutting down the OS or turning off the device opens up a whole can
>>>>>>> of worms regarding how to get the device started again. We better
>>>>>> just assume that we will just reboot things and that they will be
>>>>>>> running again automatically a bit later.
>>>>>>
>>>>>> Good clarification for the power management function!
>>>>>>
>>>>>> After shutting down, the system could boot up with the other trigger,
>>>>>> e.g. timer, WoL (which needs another device :-). These are only examples
>>>>>> of the system configuration. Since we must meet the requirement
>>>>>> for maintenance, the items to be defined are expected to be discussed
>>>>>> in the WG.
>>>>>>
>>>>>>> I'm not sure about the information flow here and whether the device
>>>>>>> should get the time or an external function should set it. I think it
>>>>>>> more likely that it will be set by an external application. But both
>>>>>>> cases would make sense. (After a reboot, the signage device will know
>>>>>>> that it might need to get the current time, so a 'get' would be
>>>>>>> useful. But in other cases, for example for daylight saving time, it
>>>>>>> makes more sense for an external device to set the time instead of
>>>>>>> the signage device repeatedly checking the time.)
>>>>>>
>>>>>> Boot-up time and time-shifting time are typical good examples.
>>>>>>
>>>>>> The original idea of clock management function is for getting
>>>>>> a precise time.
>>>>>> As you know, most systems can adjust their clock using NTP
>>>>>> or another mechanism. However, the interval of adjustment
>>>>>> depends on the OS. I heard that some OSes may perform it only once
>>>>>> a day, so the clock on such an OS is likely to drift significantly.
>>>>>> It would therefore be useful to have a function that can obtain
>>>>>> a precise time even in such a situation.
>>>>>>
>>>>>>> Remote Prompting
>>>>>>
>>>>>>> This can probably already be covered by APIs from other WGs, but it
>>>>>>> should at least be mentioned here as one of the things that the WG
>>>>>>> needs to specify (at least by reference) to cover the needs of the
>>>>>>> operator, mentioned in one of the earlier bullet points.
>>>>>>
>>>>>> It might be an interesting idea.
>>>>>> Could you tell me whether it has been discussed in the other group
>>>>>> such as WebScreen, if you know?
>>>>>>
>>>>>> In the signage case, when the signage server provides its contents,
>>>>>> there is no operator, because the setup is done in advance.
>>>>>> So, if the prompt is shown on the remote console,
>>>>>> there might be nobody there to respond.
>>>>>> Thus, this function alone is not enough for the signage case, and
>>>>>> a no-prompt case should also be considered.
>>>>>>
>>>>>> Could you contribute more for this function?
>>>> From my point of view, I don’t think that these prompting functions are
>>>> needed. The idea of Christian is to address these functions in the spec
>>>> to make clear what happens when a page uses one of these functions. For
>>>> example the spec could state that calling these functions will silently
>>>> fail without showing any dialog. Even if prompting the dialog on other
>>>> systems (like the Operator backend) is possible, it will technically raise some
>>>> issues because all prompting functions are synchronous.
>>>>>>
>>>>>> More comments are welcomed!
>>>>>>
>>>>>> Thank you in advance!
>>>>>>
>>>>>> Best regards,
>>>>>> Kiyoshi
>>>>>> ---
>>>>>> Kiyoshi Tanaka, Ph.D.
>>>>>>  NTT Service Evolution Laboratories
>>>>>> mailto:tanaka.kiyoshi@lab.ntt.co.jp
>>>>>>
>>>>>>
>>>>>> On 2016/02/01 21:55, Bassbouss, Louay wrote:
>>>>>>> Dear Kiyoshi,
>>>>>>>
>>>>>>> Thank you for making progress on the charter. We (Group of Future
>>>>>>> Applications and Media at Fraunhofer FOKUS
>>>>>>> <https://www.fokus.fraunhofer.de/fame>) revised the charter and made
>>>>>>> comments (see comments of my Colleague Christian Fuhrhop in the Google
>>>>>>> document). Please let us know if you have questions.
>>>>>>> Another question from our side which is not mentioned in the document is
>>>>>>> about protocols: Do you think we need a signalling or control protocol
>>>>>>> (can be on top of existing communication protocols like WS or WebRTC) to
>>>>>>> control the User Agent on the digital signage from a Management Backend.
>>>>>>> Imagine the Operator wants to manage digital signage terminals running
>>>>>>> different User Agents using the same management backend. We think that
>>>>>>> signalling information exchanged between the different entities need to
>>>>>>> be specified somewhere e.g. IETF.
>>>>>>> Please let us know if you have additional questions.
>>>>>>>
>>>>>>> Best regards,
>>>>>>> | Dipl.-Ing. Louay Bassbouss
>>>>>>> | Project Manager
>>>>>>> | Future Applications and Media
>>>>>>> |
>>>>>>> | Fraunhofer Institute for Open Communication Systems
>>>>>>> | Kaiserin-Augusta-Allee 31 | 10589 Berlin | Germany
>>>>>>> | Phone 49 30 - 3463 - 7275
>>>>>>> | louay.bassbouss@fokus.fraunhofer.de
>>>>>>> | www.fokus.fraunhofer.de
>>>>>>>
>>>>>>>
>>>>>>>> On 26 Jan 2016, at 09:53, Kiyoshi Tanaka (田中 清)
>>>>>>>> <tanaka.kiyoshi@lab.ntt.co.jp>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Dear,
>>>>>>>>
>>>>>>>> I've continued the chartering work with supporters.
>>>>>>>>
>>>>>>>> I've discussed with some cooperative people, and revised the charter
>>>>>>>> draft of the proposed Web-based signage WG.
>>>>>>>>
>>>>>>>> You can find it with revision marks on http://bit.ly/1kKj0RU (same URL
>>>>>>>> as we used).
>>>>>>>>
>>>>>>>> Main revisions include
>>>>>>>> - Scope added the description of security model treated by this WG
>>>>>>>> - API name produced by this WG (Remote Management API)
>>>>>>>> - Deliverables arranged to 3 functions (Power Management, Clock
>>>>>>>> Management, and Other Contextual Information)
>>>>>>>> - Dependencies limited to the close WGs
>>>>>>>>
>>>>>>>> What do you think of this draft charter?
>>>>>>>> I look forward to your feedback!
>>>>>>>>
>>>>>>>> I want to go to next step in order to establish the new WG, soon.
>>>>>>>>
>>>>>>>> You can also find the clean-up version from:
>>>>>>>> https://docs.google.com/document/d/11DZ4L1ZxFz6JNRWULph09ClwoiMFHiJiXQpG1gn1zTA/edit?usp=sharing
>>>>>>>>
>>>>>>>> If you need a MS-Word file, please contact me!
>>>>>>>>
>>>>>>>> Best regards,
>>>>>>>> Kiyoshi
>>>>>>>> ---
>>>>>>>> Kiyoshi Tanaka, Ph.D.
>>>>>>>> NTT Service Evolution Laboratories
>>>>>>>> mailto:tanaka.kiyoshi@lab.ntt.co.jp
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2015/12/18 22:59, Ryoichi "Roy" Kawada wrote:
>>>>>>>>> Hi Kiyoshi,
>>>>>>>>>
>>>>>>>>> Yes, we can list CG's as well.
>>>>>>>>> For example, Web & TV IG lists CG's in its charter.
>>>>>>>>> http://www.w3.org/2012/11/webTVIGcharter.html#coordination
>>>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>> Roy
>>>>>>>>> Ryoichi Kawada, KDDI
>>>>>>>>>
>>>>>>>>> On 2015/12/18 15:20, Kiyoshi Tanaka (田中 清) wrote:
>>>>>>>>>> Kawada-san,
>>>>>>>>>>
>>>>>>>>>> Thank you for your comment.
>>>>>>>>>>
>>>>>>>>>> I want to take care of Multi-device timing CG,
>>>>>>>>>> but I'm not sure whether CG may be listed.
>>>>>>>>>> Could you check?
>>>>>>>>>>
>>>>>>>>>> Best regards,
>>>>>>>>>> Kiyoshi
>>>>>>>>>> ---
>>>>>>>>>> Kiyoshi Tanaka, Ph.D.
>>>>>>>>>>  NTT Service Evolution Laboratories
>>>>>>>>>>  mailto:tanaka.kiyoshi@lab.ntt.co.jp
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2015/12/18 14:02, Ryoichi "Roy" Kawada wrote:
>>>>>>>>>>> Hi Kiyoshi,
>>>>>>>>>>>
>>>>>>>>>>> Thank you for drafting the charter.
>>>>>>>>>>>
>>>>>>>>>>> As for 3.1 Liaisons, how about adding Multi-device Timing CG (we saw
>>>>>>>>>>> their demo in Sapporo) ?
>>>>>>>>>>> Or is this section supposed to contain only WG / BG /IG ? If that
>>>>>>>>>>> is the
>>>>>>>>>>> case, ignore my proposal above.
>>>>>>>>>>>
>>>>>>>>>>> Cheers,
>>>>>>>>>>> Roy
>>>>>>>>>>> Ryoichi Kawada, KDDI
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 2015/12/17 20:02, Kiyoshi Tanaka (田中 清) wrote:
>>>>>>>>>>>> Dear,
>>>>>>>>>>>>
>>>>>>>>>>>> After the TPAC discussion, I revised the charter draft.
>>>>>>>>>>>> You can find it on http://bit.ly/1kKj0RU .
>>>>>>>>>>>>
>>>>>>>>>>>> Do we restart a discussion?
>>>>>>>>>>>> Your feedback will be helpful.
>>>>>>>>>>>>
>>>>>>>>>>>> Thank you in advance!
>>>>>>>>>>>>
>>>>>>>>>>>> Best regards,
>>>>>>>>>>>> Kiyoshi
>>>>>>>>>>>> ---
>>>>>>>>>>>> Kiyoshi Tanaka, Ph.D.
>>>>>>>>>>>>    NTT Service Evolution Laboratories
>>>>>>>>>>>>    mailto:tanaka.kiyoshi@lab.ntt.co.jp
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 2015/10/23 21:38, "Kiyoshi Tanaka (田中 清)" wrote:
>>>>>>>>>>>>> Dear,
>>>>>>>>>>>>>
>>>>>>>>>>>>>> We'd like to draft a charter before TPAC through this discussion
>>>>>>>>>>>>>> in order to proceed to the establishment of the WG in the f2f at TPAC.
>>>>>>>>>>>>>
>>>>>>>>>>>>> As my colleague Fujimura-san said, we've drafted a charter
>>>>>>>>>>>>> document of proposed WG,
>>>>>>>>>>>>> so we want to share it. Please find attached!
>>>>>>>>>>>>> We want to ask you to review the draft charter in BG F2F.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Moreover, we plan a breakouts session.
>>>>>>>>>>>>> You can find the idea in SessionIdeas Wiki.
>>>>>>>>>>>>> We'd like to propose standardization ideas,
>>>>>>>>>>>>> and we expect to get feedback from a variety of participant
>>>>>>>>>>>>> in order to improve the draft.
>>>>>>>>>>>>> Please come to the session and give us your comments!
>>>>>>>>>>>>>
>>>>>>>>>>>>> Best regards,
>>>>>>>>>>>>> Kiyoshi
>>>>>>>>>>>>> ---
>>>>>>>>>>>>> Kiyoshi Tanaka, Ph.D. @ NTT Service Evolution Laboratories
>>>>>>>>>>>>>    mailto:tanaka.kiyoshi@lab.ntt.co.jp
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>> Newphoria Corporation
>>>>> Director and Chief Technology Officer
>>>>> Futomi Hatano
>>>>> futomi.hatano@newphoria.co.jp
>>>>> http://www.newphoria.co.jp/
>>>>
>>>
>>>
>>>
>>
>
>
>




-- 

Kaz Ashimura, W3C Staff Contact for Auto, WoT, TV, MMI and Geo

Tel: +81 3 3516 2504

 

 




-- 

Kaz Ashimura, W3C Staff Contact for Auto, WoT, TV, MMI and Geo

Tel: +81 3 3516 2504

 

Received on Tuesday, 1 March 2016 21:06:03 UTC