- From: Kazuyuki Ashimura <ashimura@w3.org>
- Date: Tue, 11 Oct 2016 08:43:01 +0900
- To: "www-multimodal@w3.org" <www-multimodal@w3.org>
- Message-ID: <CAJ8iq9VWo_CVoc5G=vV3GdRiyRWq-JtLO7CqOw9XRTMy3PAHLw@mail.gmail.com>
Hi group,
Sorry for the delay, but it seems I had not yet sent the f2f
minutes out to the group list.
The minutes from the f2f meeting during TPAC 2016 in Lisbon are
available at:
https://www.w3.org/2016/09/19-20-multimodal-minutes.html
also as text below.
Thanks a lot for taking notes, Debbie and Helena!
Kazuyuki
---
[1]W3C
[1] http://www.w3.org/
- DRAFT -
Multimodal Interaction Working Group F2F Meeting in Lisbon
19-20 Sep 2016
[2]group photo
[2] https://www.w3.org/public-photos/multimodal/DSC_0393.JPG
See also: IRC logs from [3]Day 1 and [4]Day 2
[3] http://www.w3.org/2016/09/19-multimodal-irc
[4] http://www.w3.org/2016/09/20-multimodal-irc
Attendees
Present
Debbie_Dahl(Invited_Expert),
Dan_Burnett(Invited_Expert),
Helena_Rodriguez(Invited_Expert),
Branimir_Angelov(Wacom;Guest), Kaz_Ashimura(W3C),
Sebastian_Kaebisch(Siemens), Ningxin_Hu(Intel),
Uday_Davuluru(RWE), Andrei_Ciortea(Universite_de_Lyon)
Regrets
Chair
Debbie
Scribe
Debbie, Helena, Kaz
Contents
* [5]Topics
1. [6]MMI Discovery
2. [7]Joint meeting with the COGA TF
3. [8]Joint meeting with the WoT IG
* [9]Summary of Action Items
* [10]Summary of Resolutions
__________________________________________________________
[11]photo
[11] https://www.w3.org/public-photos/multimodal/DSC_0190.JPG
MMI Discovery
<helena> [12]http://w3c.github.io/mmi-discovery/vocabulary.html
[12] http://w3c.github.io/mmi-discovery/vocabulary.html
<helena> editor's version:
<helena>
[13]https://github.com/w3c/mmi-discovery/blob/gh-pages/vocabulary.html
[13] https://github.com/w3c/mmi-discovery/blob/gh-pages/vocabulary.html
<ddahl> scribe: ddahl
(adding a reference to the example of a "greeting service" in a
"smart environment" to the vocabulary)
debbie: this will be useful for cognitive accessibility
... also useful for web of things
... we need some text relating state management and vocabulary
<helena> [14]http://www.w3.org/TR/mmi-discovery/
[14] http://www.w3.org/TR/mmi-discovery/
<helena> The document explaining the process of discovery and
the need for changes to the architecture and the vocabulary
helena: we can add something from the original use cases note
debbie: first overview, then state management, then vocabulary
should be in the merged document
<helena>
[15]http://www.w3.org/TR/2016/WD-mmi-mc-discovery-20160411/
[15] http://www.w3.org/TR/2016/WD-mmi-mc-discovery-20160411/
debbie: vocabulary should all be grounded in standards if
possible
helena: we can add metadata in operations to say that the
component understands, for example, emotionML or BML (behavior
markup language)
the "input" and "output" sections under "behavior" list the MMI
architecture events used for sending input and output
scribe: as opposed to overall context management
... because as an MMI MC, we can assume those events are used
behavior markup language
[16]http://www.mindmakers.org/projects/bml-1-0/wiki#BML-10-Standard
[16] http://www.mindmakers.org/projects/bml-1-0/wiki#BML-10-Standard
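For illustration only, a hypothetical vocabulary entry along these
lines might look as follows (the field names, lifecycle event names,
and media types are illustrative and are not taken from the
vocabulary draft; in particular the BML media type is a placeholder):

  {
    "name": "greeting-face-synthesizer",
    "operations": [
      {
        "name": "animateFace",
        "media": ["application/emotionml+xml", "application/bml+xml"],
        "behavior": {
          "input": ["StartRequest", "PauseRequest", "ResumeRequest",
                    "CancelRequest"],
          "output": ["StartResponse", "DoneNotification"]
        }
      }
    ]
  }

Here the "media" metadata says the component understands EmotionML
and BML, while the "behavior" sections list only the MMI lifecycle
events this component actually uses for input and output.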
we should remove the key from the POST because authentication
is out of scope for MMI
<helena> [17]http://w3c.github.io/mmi-discovery/vocabulary.html
[17] http://w3c.github.io/mmi-discovery/vocabulary.html
updating [18]http://w3c.github.io/mmi-discovery/vocabulary.html
with discussion
[18] http://w3c.github.io/mmi-discovery/vocabulary.html
updating the 'state handling' WD to include the vocabulary
document
state handling [19]https://www.w3.org/TR/mmi-mc-discovery/
[19] https://www.w3.org/TR/mmi-mc-discovery/
<helena>
[20]http://w3c.github.io/mmi-discovery/Discovery_and_Registration.html
[20] http://w3c.github.io/mmi-discovery/Discovery_and_Registration.html
(taking a break)
(back from break)
helena: we should change the order of the title to
"Registration and Discovery" because registration comes first
... vocabulary should come first, then state handling
... we can get some material from the use cases document
... it will be hard to edit in real time, so we should make a
todo list
... section 4 can be the same
... it will be the component used for registration
<helena> @todolist: Keep section 4, which describes the
Resources Manager
<helena> --- add a section 6 with the vocabulary and renumber
the following sections accordingly
<ddahl_> scribe:ddahl_
<helena>
[21]http://w3c.github.io/mmi-discovery/Discovery_and_Registration.html
[21] http://w3c.github.io/mmi-discovery/Discovery_and_Registration.html
helena: leave short examples of vocabulary in place
... not all at the end
... we could have one running example of the face animation
throughout
... the face synthesizer won't need a lot of states, it will be
a very concrete example
... should state at the beginning that we'll talk about two
things, registration and then discovery/monitoring
... registration -- what does a component have to do to
describe itself. will just include a paragraph here and then
point to the use cases document
... this is very related to the Internet of Things
... we can leverage the IoT registrations like UPnP, but it
will be necessary to translate because they don't talk about
modalities
debbie: will still need information for UIs
helena: still can get this from different types of devices
... maybe WoT can provide an API to give some information like
the name of the service
... the address
debbie: how would that work for, say, a rice cooker?
helena: the visual modality is an LED, the haptic modality is a
button, and there is a thermostat
... the description has to say that
... the only thing we can control is the button on/off
... a fancier rice cooker could have more controls
debbie: what if the rice cooker can be controlled by voice?
helena: this is a different service
... a rice cooker that can be controlled by an app would be
cognitive
... the operation would be the same but the modalities would be
different (haptic and cognitive)
debbie: you could turn on the rice cooker or adjust the
firmness of the rice
... those would be different operations
helena: you have risotto, basmati, sushi rice -- they might be
different
debbie: it would be good to talk about leveraging IoT descriptions
and translation
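As a purely hypothetical sketch of the rice cooker example above
(field names and values are illustrative, not part of any draft), a
translated modality description might look like:

  {
    "name": "rice-cooker",
    "modalities": ["haptic", "visual"],
    "operations": [
      { "name": "power", "modality": "haptic",
        "values": ["on", "off"] },
      { "name": "statusLED", "modality": "visual",
        "values": ["cooking", "done"] },
      { "name": "riceProgram", "modality": "haptic",
        "values": ["risotto", "basmati", "sushi"] }
    ]
  }

A voice-controlled cooker would be registered as a separate service
with the same operations but different modalities.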
we can decide what we should say about WoT and
Discovery/Registration in our document after we meet with WoT
tomorrow
dan: also need a description of the API for data control
[22]photo
[22] https://www.w3.org/public-photos/multimodal/DSC_0194.JPG
[ Day 1 adjourned ]
__________________________________________________________
Day 2
<helena> question to kaz: in the table "functions of object" in
the template, what are Functions of Objects Track and
Accountability for?
[23]photo
[23] https://www.w3.org/public-photos/multimodal/DSC_0196.JPG
<ddahl> (updating use case document)
[24]photo [25]photo
[24] https://www.w3.org/public-photos/multimodal/DSC_0195.JPG
[25] https://www.w3.org/public-photos/multimodal/DSC_0197.JPG
<scribe> ACTION: kaz to clarify in the table "functions of
object" what Functions of Objects Track and Accountability are
for [recorded in
[26]http://www.w3.org/2016/09/20-multimodal-minutes.html#action01]
[26] http://www.w3.org/2016/09/20-multimodal-minutes.html#action01
<trackbot> Created ACTION-455 - Clarify in the table "functions
of object" what functions of objects track and accountability
are for [on Kazuyuki Ashimura - due 2016-09-27].
Joint meeting with the Cognitive Accessibility TF
(visited the Cognitive Accessibility TF's room for the joint
meeting)
-> [27]Cognitive Accessibility TF minutes
[27] https://www.w3.org/2016/09/20-coga-minutes.html
Joint meeting with the WoT IG
[28]Thing Description examples from the WoT Current Practices
document
[28]
http://w3c.github.io/wot/current-practices/wot-practices.html#td-examples
ddahl: explains what the MMI Architecture is like
... the Interaction Manager works with various Modality
Components like speech, emotion recognition, ink capture
... these communicate with the user for WoT
... there is another component named the Resource Manager
... responsible for maintaining the state of the resources
... and managing their capabilities
... shows another diagram
... with a rice cooker
[29]photo
[29] https://www.w3.org/public-photos/multimodal/DSC_0193.JPG
helena: UPnP itself doesn't provide device capability
information
... we were thinking the WoT framework should provide that kind
of information
seb: explains Thing Description
... data types coming from RDF and schema
... but how to handle the range, etc.
... maybe there are some ways to rely on
... but not fixed yet
... if the type relies on XML, we can use its schema
... JSON Schema is not yet standardized, though
... we'll have discussion on Schema.org too
ddahl: how can a developer get a value like temperature?
seba: can access the entry point specified by the "uris"
property
... and get the value by the "hrefs"
... index.html is the entry point at the uris URL
ddahl: besides temp, can we use more than one property?
helena: each property is atomic?
seba: we could allow multiple protocols
... and multiple hrefs
helena: question about the endpoint for multiple protocols
seba: properties are handled by GET
... actions are handled by POST
... there are already complaints about the usage of "uris"
... maybe better to explicitly specify the method in addition
to protocols within "uris"
kaz: also it would be even clearer to use different endpoint
file names for different protocols instead of reusing
"index.html" for all possible protocols
seba: right
... discussion still ongoing
... this is JSON-LD notation based on RDF
... can show a bigger example
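As a minimal sketch in the style of the Thing Description examples
in the Current Practices draft linked above (the exact field names
and context URL may differ in the published draft), a temperature
Thing could look like:

  {
    "@context": ["http://w3c.github.io/wot/w3c-wot-td-context.jsonld"],
    "@type": "Thing",
    "name": "MyTemperatureSensor",
    "uris": ["http://example.com/temp/"],
    "encodings": ["JSON"],
    "properties": [
      {
        "name": "temperature",
        "valueType": { "type": "number" },
        "writable": false,
        "hrefs": ["temperature"]
      }
    ],
    "actions": [
      { "name": "startMeasurement", "hrefs": ["startMeasurement"] }
    ]
  }

A client would read the property with a GET on the URI formed from
"uris" plus the property's "hrefs" entry, and invoke the action with
a POST, matching the description above.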
helena: do you have any taxonomy to describe things?
... about how "things" could be described in addition to
"devices"
... e.g., flower
(discussion on ontology)
seba: we're not working on an ontology
... different kinds of ontologies could be used with the WoT
framework
helena: if I want to use some ontology with the TD, where can I
specify that within the TD?
seba: within the @context part
... and the prefix can then be used in the rest of the TD below
the @context
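For example (the "mmi" prefix and its namespace URI below are
hypothetical, just to show where an external ontology would be
declared and then used, reusing the TD context from the sketch
above):

  {
    "@context": [
      "http://w3c.github.io/wot/w3c-wot-td-context.jsonld",
      { "mmi": "http://example.org/mmi-discovery#" }
    ],
    "@type": "Thing",
    "name": "FaceSynthesizer",
    "mmi:modality": "visual"
  }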
ddahl: unit change?
... how can the manufacturer specify units?
(after some more joint discussion)
kaz: MMI should be a promising framework for the advanced user
interfaces expected in the Web of Things world.
... These days, the group has been working on new use cases for
that purpose.
[30]photo [31]photo
[30] https://www.w3.org/public-photos/multimodal/DSC_0198.JPG
[31] https://www.w3.org/public-photos/multimodal/DSC_0194.JPG
[ Meeting adjourned ]
Summary of Action Items
[NEW] ACTION: kaz to clarify in the table "functions of object"
what Functions of Objects Track and Accountability are for
[recorded in
[32]http://www.w3.org/2016/09/20-multimodal-minutes.html#action01]
[32] http://www.w3.org/2016/09/20-multimodal-minutes.html#action01
Summary of Resolutions
[End of minutes]
__________________________________________________________
Minutes formatted by David Booth's [33]scribe.perl version
1.144 ([34]CVS log)
$Date: 2016/10/10 23:40:04 $
[33] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
[34] http://dev.w3.org/cvsweb/2002/scribe/
--
Kaz Ashimura, W3C Staff Contact for Auto, WoT, TV, MMI and Geo
Tel: +81 3 3516 2504