[PlugFest/Test] 13 June 2018

available at:
  https://www.w3.org/2018/06/13-wot-pf-minutes.html

also as text below.

Thanks a lot for taking these minutes, Michael McCool!

Kazuyuki

---

   [1]W3C

      [1] http://www.w3.org/

                               - DRAFT -

                              WoT PlugFest

13 Jun 2018

Attendees

   Present
          Kaz_Ashimura, Federico_Sismondi, Matthias_Kovatsch,
          Ryuichi_Matsukura, Toru_Kawaguchi, Takeshi_Sano,
          Tomoaki_Mizushima, Kunihiko_Toumura, Michael_McCool,
          Michael_Lagally, Sebastian_Kaebisch, Michael_Koster,
          Ege_Korkan, Dave_Raggett

   Regrets

   Chair
          Matsukura, Koster, McCool

   Scribe
          McCool

Contents

     * [2]Topics
         1. [3]Agenda
         2. [4]F-interop, by Federico
         3. [5]PlugFest prep
     * [6]Summary of Action Items
     * [7]Summary of Resolutions
     __________________________________________________________

   <kaz> scribenick: McCool

Agenda

   two topics: plugfest prep and testing

   McCool: I suggest we do Federico's presentation on testing
   first, so he can leave if he has to

F-interop, by Federico

   Federico: presentation, then a little demonstration of what is
   being done for CoAP
   ... WoT may be different, but it will give you an idea
   ... F-Interop is an H2020 project
   ... almost done (ends Oct 2018), but the tools will be
   maintained
   ... main objective is to develop and provide online testing
   tools
   ... including interoperability testing
   ... members include INRIA, ETSI, etc.
   ... currently, the state of the art is F2F events ("plugfests")
   ... but F2F plugfests have issues...
   ... they are short, and if you hit a bug it can wreck the
   entire event
   ... so F-Interop wants to provide an online and remote process
   ... so we can do continuous testing
   ... system description
   ... runs in the cloud; can connect both through a Web GUI and
   to the IUT (Implementation Under Test)
   ... can also have simulated IUTs to connect to running in cloud
   ... various tools: communication, including tunneling (bypass
   NATs and Firewalls); coordinating tests based on test
   descriptions; traffic sniffing; dissecting messages; traffic
   analysis
   ... results -> issues a PASS/FAIL/INCONCLUSIVE verdict based
   on the test description
   ... there is a tutorial available online that you can look at
   ... initially looked at a couple of IoT standards: CoAP and
   6TiSCH
   ... started with ETSI plugfest test descriptions
   ... now looking at extensions: oneM2M, OMA LwM2M, 6LoWPAN,
   and... hopefully WoT
   ... Ex: CoAP test case (test spec)
   ... did not make these up, used ETSI test specs
   ... gives objectives, configuration, pretest condition, then
   test sequence
   ... in this case, test sequence starts with a stimulus, then a
   set of checks on responses
   ... so what are WoT test cases?
   ... could not find anything similar to the CoAP test specs
   ... it would be nice to have something more formal
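
   [A minimal TypeScript sketch of how such an ETSI-style test
   description could be modelled; the field names below are
   illustrative, not F-Interop's actual format:]

      // Illustrative model of an ETSI-style test description:
      // objective, configuration, pretest conditions, then a
      // sequence of stimuli and checks yielding a verdict.
      interface TestStep {
        kind: "stimulus" | "check";
        description: string; // e.g. "client sends GET /test"
      }

      interface TestDescription {
        id: string;                  // e.g. "TD_COAP_CORE_01"
        objective: string;           // what the test verifies
        configuration: string;       // required topology
        pretestConditions: string[]; // must hold before the run
        sequence: TestStep[];        // ordered stimuli and checks
      }

      type Verdict = "PASS" | "FAIL" | "INCONCLUSIVE";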

   Ege: looks a bit like invoke simple action
   ... but autogenerated from TD
   ... can generate payload from description
   ... can send payload and check response
   ... for example, can check that output satisfies schema
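
   [A minimal sketch, assuming a simplified affordance shape, of
   the TD-driven check Ege describes: send a payload to an action
   and validate the response against the TD's output schema. Only
   ajv and node-fetch are real libraries here; the verdict logic
   is an assumption:]

      // Sketch: invoke an action described by a TD and check the
      // response against the output schema declared in the TD.
      import Ajv from "ajv";          // JSON Schema validator
      import fetch from "node-fetch"; // HTTP client

      interface ActionAffordance {
        forms: { href: string }[];
        output?: any; // JSON Schema for the expected response
      }

      async function testAction(action: ActionAffordance,
                                payload: unknown): Promise<string> {
        // Stimulus: invoke the action at its first form's href
        const res = await fetch(action.forms[0].href, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(payload),
        });
        const body = await res.json();
        // Check: PASS if the response satisfies the output schema
        const ajv = new Ajv();
        const ok = action.output
          ? ajv.validate(action.output, body)
          : res.ok;
        return ok ? "PASS" : "FAIL";
      }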

   Federico: that makes sense...
   ... but that's not predefining coverage; it's looking at the
   capabilities of the device and making sure those work

   Ege: test description basically is given by the TD

   Federico: nice approach, but at the end of the day it's doing
   the same kind of tests that were in the Beijing plugfest
   ... but with more advanced automation

   McCool: time check, only 5-10m more

   Federico: ok, I'd like to take the time to do a demo
   ... you can access it for free

   [8]https://go.f-interop.eu

      [8] https://go.f-interop.eu/

   Federico: can do both single user and user-to-user tests
   ... or can use a reference-based test
   ... pick a device and a test case
   ... there is also a playground to launch requests without a
   predefined test
   ... start it, and it deploys an instance in Docker containers
   and runs the tests
   ... need to set up environment, communication, etc.
   ... communication can include a virtual network
   ... including support for UDP
   ... based on a very formal set of test cases
   ... at the end you get a report

   scribenick: kaz

   McCool: we should clarify which part is open source and which
   is not

   scribenick: McCool

   McCool: also, I think a small team (Ege, Federico, Dave) should
   go off and put together a small proof-of-concept using just the
   OSS F-Interop tools and then report back

   scribenick: kaz

   Kaz: agree we should clarify which is open and which is not.
   ... also we should think about levels of testing again, and
   consider at which level/part this work (F-Interop) should be
   applied
   ... regarding the smaller call, would suggest at least all the
   co-Chairs and the staff contacts should be included

   scribenick: McCool

   McCool: I was thinking a working meeting with just the
   implementors, but let's discuss via email

   Koster: we should get in the habit of publishing an agenda
   ... so people who are just interested in the plugfest, for
   instance, know when to join

PlugFest prep

   Koster: did prepare one slide as a discussion point

   <ryuichi>
   [9]https://github.com/w3c/wot/blob/master/plugfest/2018-bundang
   /preparation.md

      [9] https://github.com/w3c/wot/blob/master/plugfest/2018-bundang/preparation.md

   Matsukura: want to first explain the preparation.md structure
   ... sections: the first is an intro with info from the last
   plugfest
   ... second section is new information
   ... 2.1 is a table for participants to fill out
   ... please insert your information in this table
   ... 2.2 is checkpoints
   ... these should also be discussed in the results.md for your
   report-out
   ... can follow this template and carry information forward
   ... then 2.3 is other issues
   ... if you have any suggestions for improvement please make
   them
   ... section 4 covers logistical requirements: network, power,
   etc.
   ... I also made an example, fujitsu-preparation.md
   ... information is the same as last plugfest

   Koster: table is useful, people can just add a row
   ... checkpoints are also useful for results.md
   ... but not everyone used the same template
   ... or maybe it was the same template, but edited
   ... section 2.3, other issues... my material
   ... kind of a mixture of different things: experimental
   features, validation, binding templates, services, etc.
   ... this section will go away/be integrated

   McCool: I think we should prioritize, e.g. security should be
   mandatory
   ... also, "other" maybe should be broken into "extras" and
   "experimental features". Latter are things that are candidates
   for inclusion in the standard but need so experimentation

   Sebastian: I think we need fewer "demos" and more "plugging"
   ... we really need to check how things can work together
   ... that would really help to improve the standard
   ... we need to actually focus on testing interoperability and
   the standard
   ... would like to see more "client" implementations
   ... a lot of devices, not enough systems trying to use other
   devices
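
   [As one example of the kind of "client" implementation
   Sebastian asks for, a minimal node-wot consumer sketch; the TD
   URL and property name are made up, and the method names follow
   a recent node-wot release, so they may differ from the API at
   the time of this meeting:]

      // Sketch: a pure client servient consuming another TD
      import { Servient } from "@node-wot/core";
      import { HttpClientFactory } from "@node-wot/binding-http";

      const servient = new Servient();
      servient.addClientFactory(new HttpClientFactory());

      servient.start().then(async (WoT) => {
        // Fetch and consume a TD exposed by another participant
        const td = await WoT.requestThingDescription(
          "http://device.local/td"); // hypothetical URL
        const thing = await WoT.consume(td);
        // Exercise an interaction: read a declared property
        const out = await thing.readProperty("temperature");
        console.log("temperature:", await out.value());
      });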

   Koster: agree. we also need more formal test plans and better
   record keeping
   ... agree also that we need more applications
   ... we are building a system, we need to have all the
   components present
   ... need to see more interactions

   Sebastian: rather than bringing a lot of things, we should
   focus on the content and interactions
   ... we should concentrate on the functionality

   Federico: we need a table of features that implementations
   should implement
   ... at the end of the day, pull the tables together to see
   what features are being tested

   Koster: when you have a set of test cases, it can stand in for
   a standard application
   ... but we also need some ad-hoc applications to test
   integration
   ... and use cases
   ... for example, using semantic search to find a bunch of
   sensors and analyse them for some purpose
   ... in other words, system-level applications
   ... first cover functional, then cover system
   ... functional: list features you want to test, and then
   generate tests against them
   ... then, assuming functional tests done, how do we orchestrate
   them to solve a higher-level problem
   ... we don't really know what "interoperability" is yet
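
   [A small sketch of the "list features, then generate tests
   against them" idea Koster describes above; the feature names
   and the supports() heuristic are assumptions:]

      // Sketch: pair each feature under test with every TD that
      // declares it, producing a functional test plan.
      const features = ["readProperty", "writeProperty",
                        "invokeAction"];

      function supports(td: any, feature: string): boolean {
        // Crude heuristic: property features need properties, etc.
        if (feature.endsWith("Property")) return !!td.properties;
        if (feature.endsWith("Action")) return !!td.actions;
        return false;
      }

      function planTests(tds: any[]) {
        return features.flatMap((feature) =>
          tds.filter((td) => supports(td, feature))
             .map((td) => ({ feature, thing: td.title })));
      }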

   McCool: I think the second part is more like "what is it good
   for", e.g. use case analysis
   ... functional testing is just "does the implementation match
   the spec"
   ... system testing is "does the spec have the right things"

   scribenick: kaz

   Kaz: was on the queue and wanted to mention something similar
   to McCool :)
   ... probably we should have a combined demo scenario on the
   main preparation.md, and clarify what kind of use
   cases/features are included in that. And then clarify details
   in the company-preparation.md and test/report if it works after
   the plugfest

   scribenick: McCool

   Koster: so maybe we need to think about how to structure this
   in the preparation.md file
   ... would still like to do system-level scenario exploration
   ... but should think about how to structure it better

   Matsukura: adding a "use case" section might be useful
   ... I've added it; please insert your ideas

   Koster: my presentation... more of the same
   ... we are building a system, e.g. an application that
   orchestrates multiple things
   ... but also have components and roles

   Koster: to test components we need to have them in context,
   e.g. in a system (application)
   ... several components are not servients...
   ... or are they?
   ... a servient uses the Scripting API and interacts with the
   Thing Directory
   ... if it only exposes a TD, e.g. is an endpoint device, then
   it's not a servient
   ... system architecture includes both local and remote parts
   ... I myself am going to bring fewer things but focus on
   applications
   ... but applications have a bunch of common components and
   patterns, e.g. local/remote directories, proxies, etc.
   ... also some new concepts, like "Bridge Servient"; you can
   also think of it as a "role"
   ... also notable is that the proxy-proxy connection doesn't
   matter
   ... but you need to go through the proxy to get to local
   devices from the cloud
   ... the other direction does not necessarily need the proxy
   ... want to list the patterns separately
   ... but would like to take an application-layer approach
   ... register things locally
   ... tell the proxy what I want to make available
   ... can use the information in the TD to figure out how to
   implement
   ... missing an arrow here, some extra orchestration
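
   [A rough sketch of the register-then-expose flow described
   here; the directory endpoints and the href-rewriting rule are
   all assumptions:]

      // Sketch: register a TD with the local directory, then hand
      // a rewritten copy to the cloud directory via the proxy.
      import fetch from "node-fetch";

      async function registerLocally(td: any): Promise<void> {
        await fetch("http://local-tdir.local/td", { // assumed endpoint
          method: "POST",
          headers: { "Content-Type": "application/td+json" },
          body: JSON.stringify(td),
        });
      }

      async function exposeViaProxy(td: any): Promise<void> {
        // Rewrite the base so cloud clients reach the device
        // through the proxy (this rewrite rule is an assumption)
        const proxied = { ...td,
          base: `https://proxy.example.com/things/${td.title}` };
        await fetch("https://cloud-tdir.example.com/td", {
          method: "POST",
          headers: { "Content-Type": "application/td+json" },
          body: JSON.stringify(proxied),
        });
      }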

   McCool: noticed that the directory and the proxy are paired
   ... and also notable is that the TD may have to be modified in
   various ways before being registered in the cloud TDir
   ... could be the other way around: could register in the TDir,
   then proxy can pick it up

   Koster: another point: we want to think about applications and
   how to deploy them
   ... I am thinking about using node-wot to deploy applications
   ... that's what I suggest we start thinking about
   ... we need goals that are bigger than just turning a light on
   ... what are the higher-level problems we are trying to solve,
   and what are their requirements?
   ... for example, what are the security requirements

   Sebastian: don't think we need to spend so much time on the
   application
   ... I am thinking more that we just do more 1:1 testing
   ... and try to use all the interaction patterns
   ... we could go further and do mash-ups
   ... even that simple approach exposes many issues

   Koster: I was thinking about how to accomplish what you say...
   ... can create a control panel in a client that lets you
   connect to all the devices

   McCool: could be a simple tool that generates a web dashboard
   given a TD
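
   [A minimal sketch of such a tool: render a static HTML page
   from the properties section of a TD; the TD shape is
   simplified and the markup is illustrative:]

      // Sketch: turn a TD's properties into a trivial dashboard
      interface SimpleTD {
        title: string;
        properties?: Record<string, { forms: { href: string }[] }>;
      }

      function renderDashboard(td: SimpleTD): string {
        const rows = Object.entries(td.properties ?? {})
          .map(([name, p]) =>
            `<tr><td>${name}</td>` +
            `<td><a href="${p.forms[0].href}">read</a></td></tr>`)
          .join("");
        return `<html><body><h1>${td.title}</h1>` +
               `<table>${rows}</table></body></html>`;
      }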

   Kaz: we can clarify use cases, briefly, in preparation.md
   ... so people can understand the "story"

   Sebastian: if you recall, in Beijing
   ... we tried to identify what worked well and what didn't
   ... I am a little worried about our timeline
   ... we only have a couple more plugfests to get experience with
   TD in practice
   ... then we are supposed to be done
   ... not convinced we are testing things completely enough
   ... we need to at least do all the basic component tests

   scribenick: kaz

   Kaz: would agree with your concern. however, each use case
   doesn't have to be very fancy but can be something simple, and
   the main preparation use case scenario can simply refer to
   them and clarify concrete demo scenarios, run sometimes in
   parallel and sometimes sequentially using a dashboard, etc.

   scribenick: McCool

   Koster: this is definitely a different level
   ... we also need the basic testing at the plugfest
   ... but... the low-level testing is obviously a higher priority
   ... although we also need to move forward with system testing
   too

   Toru: do we really need the Scripting API?

   McCool: my opinion is that the minimum is that it "consumes" a
   TD
   ... if it only exposes a TD it's an endpoint device

   Koster: I do want to motivate using the TDir as well
   ... but, yeah...

   <Zakim> kaz, you wanted to ask about the availability of
   Koster's slides :)

   Matsukura: our implementation doesn't use the Scripting API,
   but we need to better focus on "applications"

   Kaz: can you make slides available?

   [10]Koster's slides

     [10] https://github.com/w3c/wot/blob/master/plugfest/2018-bundang/Plugfest-System-Arch-20180613.pdf

   Lagally: suggestion
   ... I suggest adding a legend to the architecture slides, so
   the meaning of the colours of the boxes can be understood

   Koster: sure

   Matsukura: AOB?

   ok, adjourn

Summary of Action Items

Summary of Resolutions

   [End of minutes]
     __________________________________________________________


    Minutes formatted by David Booth's [11]scribe.perl version
    1.152 ([12]CVS log)
    $Date: 2018/06/18 08:28:28 $

     [11] http://dev.w3.org/cvsweb/~checkout~/2002/scribe/scribedoc.htm
     [12] http://dev.w3.org/cvsweb/2002/scribe/
