
Re: Notes on W3C WoT Security Use Cases

From: Mccool, Michael <michael.mccool@intel.com>
Date: Thu, 20 Jul 2017 07:08:36 +0000
To: Benjamin Francis <bfrancis@mozilla.com>
CC: "daisuke.ajitomi@toshiba.co.jp" <daisuke.ajitomi@toshiba.co.jp>, "Soumya Kanti Datta" <Soumya-Kanti.Datta@eurecom.fr>, "Reshetova, Elena" <elena.reshetova@intel.com>, public-wot-ig <public-wot-ig@w3.org>, "public-wot-wg@w3.org" <public-wot-wg@w3.org>
Message-ID: <8A11AE6A-0BC4-485D-9951-440384699DCA@intel.com>
I'm not being entirely fair to NDN.  They are researching a longer-term solution, which is great, and they also point out the two key primitives that need to be implemented to support local networking: trust and rendezvous (aka local data syncing/discovery).   See "Local Trust Management and Rendezvous in Named Data Networking of Things".

But... what I'd like to figure out is how to recommend accomplishing these things without a whole new network infrastructure.  I would even be happy with a subset (eg maybe configuration or setup requiring the cloud, but not normal operation).   I am specifically concerned with making it *possible* for secure IoT systems to run independently of external network infrastructure.   Of course, if a particular system depends on a cloud service for other things (eg voice recognition), that's not possible.

The first step is to list the specific things that break and why.   It may be possible to address certain specific problems for short outages with some tweaks to existing systems, eg more flexible certificate lifetimes, extended DNS caching, etc.
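To make the "extended DNS caching" tweak concrete, here is a minimal sketch (purely illustrative; the class name and behaviour are my own invention, not anything from a spec): a resolver cache that answers from fresh entries normally, but falls back to expired ("stale") entries only when the upstream resolver is unreachable, so local names keep resolving through a short outage.

```python
import time

class StaleTolerantDNSCache:
    """Toy resolver cache: serves fresh answers normally, and falls back
    to expired ("stale") answers only when the upstream resolver fails.
    Illustrative sketch only -- names and behaviour are assumptions."""

    def __init__(self, resolve_upstream):
        # resolve_upstream: callable name -> (ip, ttl_seconds); may raise OSError
        self.resolve_upstream = resolve_upstream
        self.cache = {}  # name -> (ip, expires_at)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(name)
        if entry and now < entry[1]:
            return entry[0]                   # fresh: answer from cache
        try:
            ip, ttl = self.resolve_upstream(name)
            self.cache[name] = (ip, now + ttl)
            return ip
        except OSError:
            if entry:
                return entry[0]               # outage: serve the stale answer
            raise                             # never resolved, nothing to serve
```

A real deployment would bound how long stale answers may be served; this sketch just shows the failure-mode distinction the email is after (normal operation keeps working, cold-start does not).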

Daisuke, it looks like your work is specifically for HTTPS/CORS, correct?

Michael McCool

On Jul 20, 2017, at 12:52, Mccool, Michael <michael.mccool@intel.com> wrote:

Daisuke, Ben,

The Distributed Internet Infrastructure meeting at IETF was interesting but a bit of a disappointment in some ways.  The good news is it looks like a group will be formed that can be a source of centralized recommendations and standards to solve these problems.  The bad news is it may take time to focus them on solving the real problems right in front of us: there were interesting presentations on blockchain and named data networking (specific technology solutions), but not enough discussion, IMO, of what the *problems* actually are.

The way I see it, the main issues are holes in DNS and TLS certificate management that make it hard to use locally networked secure systems when the rest of the internet is (temporarily) unavailable.  I think plugging these holes to at least handle temporary outages gracefully should be the first priority.  I also think the priority is that systems continue working in normal operation (eg after setup) during an outage; it is much lower priority that setup itself work during an outage.  As a user I want to be able to turn my lights on and off during an outage, but adding new devices or changing system configuration can wait.

By the way, when I commented about mDNS I was just thinking about it as a bootstrap discovery mechanism to find a local registry etc.  One other technology mentioned I need to look into is DNS-SD.
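As a sketch of what that bootstrap discovery step involves at the wire level, here is a minimal mDNS standard query for a PTR record, built with only the standard library. The service type `_wot._tcp.local` is hypothetical (chosen just for illustration); mDNS reuses the ordinary DNS packet format, with the query ID fixed at 0 for multicast.

```python
import struct

def encode_dns_name(name):
    """DNS wire-format name: length-prefixed labels, zero-byte terminator."""
    labels = name.rstrip(".").split(".")
    return b"".join(bytes([len(l)]) + l.encode("ascii") for l in labels) + b"\x00"

def build_mdns_ptr_query(service_type):
    """Build one mDNS standard query asking who provides service_type."""
    # Header: ID=0, flags=0 (standard query), 1 question, no answer/authority/additional
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # Question: QTYPE 12 = PTR, QCLASS 1 = IN
    question = encode_dns_name(service_type) + struct.pack("!HH", 12, 1)
    return header + question

# A client would send this datagram over UDP to the mDNS multicast
# group 224.0.0.251, port 5353, and listen for PTR responses naming
# local service instances (which DNS-SD then resolves to host/port).
```

DNS-SD layers service enumeration on top of exactly these PTR queries, which is presumably why it came up alongside mDNS as a local-registry bootstrap mechanism.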

Michael

On Jul 19, 2017, at 21:01, Benjamin Francis <bfrancis@mozilla.com> wrote:

Hi Daisuke,

On 16 July 2017 at 06:23, <daisuke.ajitomi@toshiba.co.jp> wrote:
Great summary of the issue and solutions. It is very interesting to me.
In my opinion, it is not just an offline issue; it also includes a big privacy problem: whether globally accessible domain names can be issued to personal-use devices or not.
In your solution (getting DV certs and using HTTPS to the gateways), the users have to disclose their IP addresses and domain names globally and open ports to the global internet

I don't think that giving globally accessible domain names to consumer devices is in itself a privacy problem. Many devices already have publicly resolvable addresses, open ports or tunnel through firewalls, and most users disclose their IP address every time they visit a website. What is important is getting authentication, authorisation and encryption right so that those devices cannot be accessed by unauthorised users and data cannot be intercepted.
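One way to picture the authorisation half of that point: a gateway that only acts on requests carrying a valid tag derived from a locally provisioned secret. This is a minimal sketch under assumed names (the key, the request format, and the `/things/...` paths are all made up for illustration; it is not the WoT security model), using HMAC so the check works entirely offline.

```python
import hashlib
import hmac

# Hypothetical: a secret provisioned onto the gateway during device setup.
DEVICE_KEY = b"example-locally-provisioned-secret"

def sign_request(key, method, path):
    """Client side: tag a request so only holders of the key can issue it."""
    msg = f"{method} {path}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def authorise(key, method, path, tag):
    """Gateway side: constant-time comparison; reject unsigned or forged tags."""
    expected = sign_request(key, method, path)
    return hmac.compare_digest(expected, tag)
```

The point of the sketch is only that the authorisation decision needs no external party at request time, which is what keeps a locally networked device usable during an outage.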

even though there are alternative solutions (e.g. cloud-hosted, web-based remote control services that are well-managed by service admins).

The danger with these cloud-based services is that they risk centralisation and lock-in for users, and we've already seen examples of businesses shutting down cloud services and bricking consumer devices as a result. There is certainly a place for these managed services, but the architecture of the Web of Things should not fundamentally depend on a central point of control; it must be decentralised at least to the extent that the web is today.

In particular, considering industrial use cases, I don't know whether this approach is acceptable or not.

Industrial use cases certainly have different characteristics to consumer use cases.


My colleagues and I have had a similar problem and launched a Community Group named "HTTPS in local network CG" this year.
We have only just started discussions about use cases and requirements.
I'd appreciate it if you could check it out.
https://www.w3.org/community/httpslocal/
https://github.com/httpslocal/usecases (draft)
https://httpslocal.github.io/cg-charter/ (draft)

In addition, in the last TPAC, we held a breakout session for this topic.
https://www.w3.org/wiki/TPAC2016/session-https-local-summary

The following slide includes my early-stage idea as one of the potential solutions.
https://www.w3.org/wiki/images/3/37/2016.w3c.breakout_session.dot-local-server-cert.p.pdf

This is all very interesting, thank you!

Ben
Received on Thursday, 20 July 2017 07:09:10 UTC
