- From: Mccool, Michael <michael.mccool@intel.com>
- Date: Thu, 20 Jul 2017 07:44:57 +0000
- To: Benjamin Francis <bfrancis@mozilla.com>
- CC: "daisuke.ajitomi@toshiba.co.jp" <daisuke.ajitomi@toshiba.co.jp>, "Soumya Kanti Datta" <Soumya-Kanti.Datta@eurecom.fr>, "Reshetova, Elena" <elena.reshetova@intel.com>, public-wot-ig <public-wot-ig@w3.org>, "public-wot-wg@w3.org" <public-wot-wg@w3.org>
- Message-ID: <180A43C6-F27B-4515-800F-68F04D306325@intel.com>
Also, a link to info on DNS-SD. The basic idea is to insert extra records into the answers returned by DNS to advertise services, such as a local registry. Since you can run a local DNS server, and point to THAT with your DHCP server, you're off. My understanding is that you can set up DNS-SD on existing systems without breaking anything, too (someone correct me if I'm wrong...): http://www.dns-sd.org/

You might still need a local hub in the short term, though, and a separate hub might require some semi-annoying setup by the user (e.g. pointing their router's DHCP server at an alternative DNS server in the hub, or turning off DHCP in the router so a preconfigured one in a separate hub can be used; features that not all routers support and not all users are willing to configure). In theory, and in the slightly longer run, local DNS-SD could be added to router/IoT hub combos, and/or a separate IoT-only WiFi network could be used.

This helps with rendezvous but not trust. But a common local registry rendezvous is at least a starting point. Now you just have to figure out how to trust it.

Michael

On Jul 20, 2017, at 12:54, Mccool, Michael <michael.mccool@intel.com> wrote:

Daisuke, Ben,

The Distributed Internet Infrastructure meeting at IETF was interesting, but a bit of a disappointment in some ways. The good news is that it looks like a group will be formed that can be a source of centralized recommendations and standards to solve these problems. The bad news is that it may take time to focus them on solving the real problems right in front of us: there were interesting presentations on blockchain and named data networks (specific technology solutions), but not enough discussion, IMO, of what the *problems* actually are.

The way I see it, the main issues are holes in DNS and TLS certificate management that make it hard to use locally networked secure systems when the rest of the internet is (temporarily) unavailable. I think plugging these holes, at least enough to handle temporary outages gracefully, should be the first priority. I also think the priority is that systems continue working during an outage in normal operation (e.g. after setup); it is a much lower priority that setup itself works during an outage. As a user, I want to be able to turn my lights on and off during an outage, but adding new devices or changing system configuration can wait.

By the way, when I commented about mDNS I was just thinking about it as a bootstrap discovery mechanism to find a local registry, etc. One other technology mentioned that I need to look into is DNS-SD.

Michael

On Jul 19, 2017, at 21:01, Benjamin Francis <bfrancis@mozilla.com> wrote:

Hi Daisuke,

On 16 July 2017 at 06:23, <daisuke.ajitomi@toshiba.co.jp> wrote:

> Great summary of the issue and solutions. It is very interesting to me. In my opinion, it is not just an offline issue; it also includes a big privacy problem of whether globally accessible domain names can be issued to personal-use devices or not. In your solution, getting DV certs and using HTTPS to the gateways, the users have to disclose their IP addresses and domain names globally and open ports to the global internet.

I don't think that giving globally accessible domain names to consumer devices is in itself a privacy problem. Many devices already have publicly resolvable addresses, open ports or tunnel through firewalls, and most users disclose their IP address every time they visit a website. What is important is getting authentication, authorisation and encryption right so that those devices can not be accessed by unauthorised users and data can not be intercepted.

> even though there are alternative solutions (e.g. a cloud-hosted web-based remote control service that is well managed by service admins).

The danger with these cloud-based services is that they risk centralisation and lock-in for users, and we've already seen examples of businesses shutting down cloud services and bricking consumer devices as a result. There is certainly a place for these managed services, but the architecture of the Web of Things should not fundamentally depend on a central point of control; it must be decentralised at least to the extent that the web is today.

> In particular, considering industrial use cases, I don't know whether the approach can be acceptable or not.

Industrial use cases certainly have different characteristics to consumer use cases.

> My colleagues and I have had a similar problem and launched a Community Group named "HTTPS in local network CG" this year. We have still just started discussions about use cases and requirements. I'd appreciate it if you check it out.
> https://www.w3.org/community/httpslocal/
> https://github.com/httpslocal/usecases (draft)
> https://httpslocal.github.io/cg-charter/ (draft)
> In addition, at the last TPAC, we held a breakout session on this topic.
> https://www.w3.org/wiki/TPAC2016/session-https-local-summary
> The following slides include my early-stage idea as one of the potential solutions.
> https://www.w3.org/wiki/images/3/37/2016.w3c.breakout_session.dot-local-server-cert.p.pdf

This is all very interesting, thank you!

Ben
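A minimal sketch of the unicast DNS-SD lookup described at the top of this thread, assuming the dnspython package; the "_wot._tcp" service type, the "home.example" domain, and the instance name are hypothetical placeholders for whatever a local registry would actually advertise:

```python
# Sketch: unicast DNS-SD lookup of a local registry via the local DNS server,
# assuming the dnspython package (dns.resolver). The "_wot._tcp" service type
# and the "home.example" domain are hypothetical placeholders.
import dns.resolver

DOMAIN = "home.example"                  # hypothetical domain served by the local DNS
SERVICE = "_wot._tcp." + DOMAIN          # hypothetical service type for a local registry

# 1. PTR records under the service type enumerate the advertised instances.
for ptr in dns.resolver.resolve(SERVICE, "PTR"):
    instance = str(ptr.target)           # e.g. "registry._wot._tcp.home.example."

    # 2. The instance's SRV record gives the host name and port.
    srv = next(iter(dns.resolver.resolve(instance, "SRV")))

    # 3. Its TXT record carries key=value metadata (e.g. a base path).
    txt = next(iter(dns.resolver.resolve(instance, "TXT")))
    metadata = b" ".join(txt.strings).decode()

    print(f"{instance} -> host={srv.target} port={srv.port} txt={metadata}")
```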
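And a sketch of the mDNS bootstrap idea mentioned above (advertising a local registry so clients on the same network can find it without any DNS configuration), assuming the python-zeroconf package; the service type, instance name, address, and port are made up for illustration:

```python
# Sketch: advertise a (hypothetical) local registry over mDNS/DNS-SD,
# assuming the python-zeroconf package. Clients on the same link can then
# browse for "_wot._tcp.local." to bootstrap discovery without a DNS server.
import socket
from zeroconf import ServiceInfo, Zeroconf

SERVICE_TYPE = "_wot._tcp.local."                  # hypothetical service type
info = ServiceInfo(
    SERVICE_TYPE,
    "registry." + SERVICE_TYPE,                    # hypothetical instance name
    addresses=[socket.inet_aton("192.168.1.10")],  # made-up local address
    port=8443,                                     # made-up port
    properties={"path": "/things"},                # made-up metadata
)

zc = Zeroconf()
zc.register_service(info)        # publish the PTR/SRV/TXT records on the local link
try:
    input("Registry advertised; press Enter to stop...\n")
finally:
    zc.unregister_service(info)
    zc.close()
```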
Received on Thursday, 20 July 2017 07:45:29 UTC