- From: Phillip Hallam-Baker <hallam@gmail.com>
- Date: Mon, 16 Jul 2012 12:23:29 -0400
- To: Poul-Henning Kamp <phk@phk.freebsd.dk>
- Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Also, looking at the SPDY spec, it looks like it is actually two specs: 1) a transport-layer spec defining a framing scheme, and 2) extensions to HTTP.

Now the two proposals are tightly coupled in SPDY as they are inter-dependent. But a spec needs to be layered to support mix-n-match. In particular, folk who are doing real-time video feeds are going to be best served if they can have an HTTP that sits upon an API presenting two connections: a bilateral connection based on TCP for control, and a multicast connection for the content.

It is also a fair question to ask how TLS is going to fit into this scheme and whether a future version of TLS and SPDY might eventually converge. That does not mean doing it right out of the gate, but it is something we might want to work towards.

An X.0 version of a spec has deployment concerns that are in many ways opposite to the usual concerns. The way I look at it is that you put out a 1.0 version of a spec, and people add and extend based on that platform for a decade or so, trying to minimize the impact of each incremental change. Each extension is simple by itself, but the cumulative effect is a large increase in complexity as the extensions interact in unexpected ways.

A 2.0 version of HTTP would be an opportunity to scrape away some of the barnacles that have been added to the spec and to establish a new baseline expectation. For example, I think that HTTP/2.0 should use SRV records to establish the service connection (unless a port is explicitly specified). That is the type of change that has large potential value if everyone does it, but little value if support is spotty.

On Mon, Jul 16, 2012 at 10:18 AM, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:
> In message <CAMm+LwhkvtPOVV=hi2TcYQ9F46Pf459m4Pj309VkHg6NOOYqyg@mail.gmail.com>
> , Phillip Hallam-Baker writes:
>
>>What I care about is not how long it takes to
>>implement but if I can implement on restricted chips like embedded
>>control systems. So code footprint is more important to me than
>>time-to-implement. And I think it is a more objectively fair test.
>
> I think it is a good point, if we have any hope of getting rid of
> HTTP/1.0, HTTP/2.0 must penetrate all the way down into access points,
> home routers and other embedded consumer products.
>
> NIST has used similar criteria for AES and SHA3 beauty-contests.
>
> --
> Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
> phk@FreeBSD.ORG | TCP/IP since RFC 956
> FreeBSD committer | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.

--
Website: http://hallambaker.com/
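As a rough illustration of the SRV suggestion above, here is a minimal sketch in Go of how a client might establish the service connection from an SRV lookup, falling back to the host name and an explicit default port when no record is published. The `_http._tcp` service label, the fall-back policy, and the function name are assumptions made for the example, not anything taken from the message or from a spec.

```go
package main

import (
	"fmt"
	"net"
	"sort"
)

// resolveHTTPService returns a "host:port" target for the named service.
// It first asks DNS for _http._tcp.<name> SRV records (an assumed label,
// for illustration); if none exist, it falls back to the bare host name
// and the caller-supplied default port.
func resolveHTTPService(name string, defaultPort int) string {
	_, srvs, err := net.LookupSRV("http", "tcp", name)
	if err != nil || len(srvs) == 0 {
		// No SRV record published: connect to the host itself.
		return fmt.Sprintf("%s:%d", name, defaultPort)
	}
	// Lowest Priority value wins; weight-based load spreading is omitted
	// to keep the sketch short.
	sort.Slice(srvs, func(i, j int) bool { return srvs[i].Priority < srvs[j].Priority })
	t := srvs[0]
	return fmt.Sprintf("%s:%d", t.Target, t.Port)
}

func main() {
	addr := resolveHTTPService("example.com", 80)
	fmt.Println("connecting to", addr) // e.g. "www.example.com.:8080" if an SRV record exists
}
```

The point of the sketch is that the port, and even the target host, come out of DNS rather than the URL, which is why the change only pays off if support is close to universal.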
Received on Monday, 16 July 2012 16:24:01 UTC