- From: David Sheets <kosmo.zb@gmail.com>
- Date: Thu, 15 Jan 2015 17:00:20 +0000
- To: Henri Sivonen <hsivonen@hsivonen.fi>
- Cc: Noah Mendelsohn <nrm@arcanedomain.com>, Tim Berners-Lee <timbl@w3.org>, Public TAG List <www-tag@w3.org>
On Thu, Jan 15, 2015 at 4:00 PM, Henri Sivonen <hsivonen@hsivonen.fi> wrote:
> On Fri, Jan 9, 2015 at 7:37 PM, David Sheets <kosmo.zb@gmail.com> wrote:
>> Pervasive low-bandwidth and power/CPU constrained edge networks are
>> going to become very common.
>
> The case where a caching proxy helps in theory is when the uplink is
> constrained compared to the edge network from the proxy to the end
> point. If the edge network is itself slow, the case for proxy caches
> is weak even on theoretical grounds.

If each hop has low QoS (e.g. high drop rates or congestion), the link slowness compounds. I believe this makes the case for local proxy caches stronger, not weaker.

>> Smarter hub nodes with
>> minimal/intermittent uplink could profitably serve signed/hashed
>> resources in a proxy context
>
> Why would these "things" all be requesting the same large resources?
> (Surely the "things" aren't all requesting currently-popular movies on
> the same edge network.)

Things might want to load:

- new OS images
- new firmware images
- new SDR modules
- new geographic data
- new thing-to-thing protocol software
- aggregated sensor data from other Things

These mostly need to be verified/signed but not transport-encrypted (for local hops). Something like sensor data might need to encrypt and sign the payload, but not the whole transaction, since the payload can be shared.

>> for use cases where confidentiality is
>> not necessary and direct HTTPS authority is too heavy.
>
> What are these use cases? Isn't the expectation that the "things" on
> the Internet of Things will be even closer to people and, therefore,
> be even more privacy-sensitive than what we have now?

Some of their traffic will be very privacy-sensitive. That traffic may flow over different links (e.g. opportunistic Bluetooth) or use different protocols (tunnels built from preshared keys to hub devices).
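To make the verified-but-not-transport-encrypted pattern above concrete, here is a minimal sketch in Python. It assumes a "thing" obtains a trusted digest out of band (e.g. baked into a previous update) and can then fetch the resource itself from any untrusted hop-by-hop cache; the filename and contents are purely illustrative.

```python
# Hypothetical sketch: a "thing" fetches a firmware image from an
# untrusted local proxy cache and verifies its integrity against a
# SHA-256 digest pinned out of band. The blob contents are made up.
import hashlib
import hmac

# Digest the thing already trusts (shipped via a prior signed update).
PINNED_SHA256 = hashlib.sha256(b"firmware-v2.bin contents").hexdigest()

def verify_image(image_bytes: bytes, pinned_digest: str) -> bool:
    """Accept the image only if its SHA-256 matches the pinned digest."""
    actual = hashlib.sha256(image_bytes).hexdigest()
    # compare_digest avoids leaking match position via timing
    return hmac.compare_digest(actual, pinned_digest)

# Any hop (plain HTTP, a neighbor's cache) may have served this blob;
# the client trusts the digest, not the transport.
blob_from_cache = b"firmware-v2.bin contents"
assert verify_image(blob_from_cache, PINNED_SHA256)

tampered = b"firmware-v2.bin contents (modified)"
assert not verify_image(tampered, PINNED_SHA256)
```

The same shape works with a public-key signature in place of a pinned hash when the resource changes more often than the trust anchor.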
Some of their traffic will not in itself be privacy-sensitive, but metadata about its flows will be. For instance, I do not want my sensors communicating directly with a third-party Internet service because:

1. Listeners on my uplink will know how many and what type of sensors I have, and when they poll.
2. The third-party service can know precisely which sensors are contacting it from my connection.
3. Sensors may report my personal data behind my back.

I want to be in full control of the devices I own, and I should not be required to permit encrypted traffic between vendor A's device and vendor A to get functionality. I may prefer to get my IoT updates over an anonymization network like Tor, but I do not want each of my sensors running Tor (nor do I think most vendors or projects will, or should, design for Tor from the sensor). In this case, it makes sense to aggregate and/or tunnel update requests from a hub.

Finally, managing CA lists on sensors (required for direct HTTPS authority) is much more complicated than a preshared secret or another lightweight payload-based cryptographic protocol.

>> Is the Web going to be part of the "Internet of Things"?
>
> I think debating that question requires agreement on what the Web is.
> See https://www.mnot.net/blog/2014/12/04/what_is_the_web
>
> If you assume the proxy to be near the "thing" in the Internet of
> things, it implies the "thing" would be a client--i.e. a Web browser.
> The W3C has already been through an era when it was claimed that
> limited browsers on underpowered devices were important. Writing specs
> with that assumption turned out to be a mistake: the Web really took
> off on mobile once the devices became powerful enough to run the kind
> of browser engine desktop browsers also use.

I guess we should let the various software package management tools using HTTP(S) know that they aren't part of the Web.
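To illustrate the preshared-secret alternative mentioned above: a sensor provisioned with a key at pairing time can authenticate its payloads to the hub with an HMAC, with no CA list and no TLS stack on the device. This is a hypothetical sketch; the key, payload, and names are made up for the example.

```python
# Hypothetical sketch: sensor-to-hub payload authentication with a
# preshared key (HMAC-SHA256) instead of CA-based transport security.
import hashlib
import hmac

PRESHARED_KEY = b"example-preshared-key"  # provisioned at pairing time

def tag(payload: bytes, key: bytes = PRESHARED_KEY) -> bytes:
    """MAC the payload; the hub recomputes this to authenticate it."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def check(payload: bytes, mac: bytes, key: bytes = PRESHARED_KEY) -> bool:
    """Constant-time verification on the hub side."""
    return hmac.compare_digest(tag(payload, key), mac)

report = b'{"sensor": "temp-1", "value": 21.5}'
mac = tag(report)
assert check(report, mac)             # hub accepts the genuine report
assert not check(report + b"x", mac)  # tampered payload is rejected
```

Note this gives authenticity and integrity only; a payload that also needs confidentiality would additionally be encrypted before tagging.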
> As for the "thing" in the Internet of things being a Web server,
> there's less relevance to proxies on the edge network where the
> "thing" resides.

Hop-by-hop caches aren't far-fetched.

This is all to say that there is no one-size-fits-all solution for the various demands put on devices that use Web technologies like URLs, links, and HTTP. I agree that TLS is a very good idea for most of today's personalized, popular consumer web services. It is not at all clear to me that requiring the very complicated (and evolving) TLS stack in every end-user device that wishes to resolve web addresses is a good idea.

If we move to a Web where every address begins with "https", many transactions will be simultaneously too "secure" at the transport layer and too insecure at the application layer. The transport security will be more complicated and more defensive than desirable. Constructing useful, secure, and privacy-preserving systems in this regime will then require undermining the transport security, or producing only centralized (maybe only locally centralized...) designs, or both. In many cases, additional cryptographic protocols should be used but may be neglected because "we already use TLS".

I favor design approaches that produce building blocks, so that individual applications can be tailored to suit their specific constraints. A design approach advocating a system that forces applications to adopt large and changing components in order to participate at all is, in my humble opinion, flawed.

I don't know what this means for the current debate, and I mostly don't care what is published. I'm simply trying to put some (potentially) underrepresented ideas into this forum.

Cheers,

David Sheets
Received on Thursday, 15 January 2015 17:00:49 UTC