- From: Amos Jeffries <squid3@treenet.co.nz>
- Date: Wed, 22 Aug 2012 15:15:31 +1200
- To: <ietf-http-wg@w3.org>
On 22.08.2012 11:24, James M Snell wrote:
> On Tue, Aug 21, 2012 at 2:30 PM, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:
>
>> In message <2A8028EE-0EEC-4E42-89C4-347C33F60B90@checkpoint.com>, Yoav Nir writes:
>>
>>> A requirement for downgrade creates too many restrictions, even if we
>>> throw SPDY away. The beginning of a 2.0 connection would have to look
>>> enough like 1.x so as to fool existing servers.
>>
>> Yes, and ?
>>
>> Sending:
>>
>>   HEAD / HTTP/1.1
>>   Upgrade: HTTP/2.0
>>
>> as a preamble on a connection is not very expensive.
>
> Sending is not a problem... but handling it on the server side could be,
> depending on what work the server needs to do to generate the HEAD
> response. As per the spec, "The metainformation contained in the HTTP
> headers in response to a HEAD request SHOULD be identical to the
> information sent in response to a GET request" .. calculating the Entity
> Tag, Last-Modified Timestamp, Content-Length and Content-Type headers
> could be rather more expensive than one may expect depending on what's
> on the end of that request.

That would be why HEAD is a bad choice. We also have the prefix request:

  OPTIONS * HTTP/1.1
  Max-Forwards: 0
  Upgrade: HTTP/2.0

Which can be replied to with:

  HTTP/1.1 101 Switching Protocols
  CRLF
  <HTTP/2.0 response frame with transport options/features available>

or,

  HTTP/1.1 200 OK
  <frame with 1.1 options>

Whether this is better or worse than just adding Upgrade: to the first
GET depends on whether we do SPDY-style gzip compression on headers, or
an adaptive dictionary style learning as per network-friendly.

Amos
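As a rough illustration of the OPTIONS-prefix idea above, the probe and the check on the server's reply can be sketched in Python. This is a sketch only: the helper names are invented, the `Host:` and `Connection: Upgrade` headers are additions that HTTP/1.1 Upgrade semantics would require but the message does not spell out, and `example.net` is a placeholder.

```python
# Hypothetical sketch of the OPTIONS * upgrade probe discussed above.
# Helper names and the example host are invented for illustration.

def build_upgrade_probe() -> bytes:
    """Build the HTTP/1.1 prefix request advertising HTTP/2.0 support."""
    lines = [
        "OPTIONS * HTTP/1.1",
        "Host: example.net",    # assumed: HTTP/1.1 requires a Host header
        "Max-Forwards: 0",      # keep the probe hop-by-hop, as in the message
        "Upgrade: HTTP/2.0",
        "Connection: Upgrade",  # assumed: Upgrade is hop-by-hop via Connection
        "",
        "",                     # blank line terminates the header block
    ]
    return "\r\n".join(lines).encode("ascii")

def server_switched(status_line: bytes) -> bool:
    """True on '101 Switching Protocols'; False on a plain 1.1 reply (e.g. 200)."""
    version, _, rest = status_line.partition(b" ")
    return version.startswith(b"HTTP/1.") and rest.startswith(b"101")
```

After a `101`, the client would start reading HTTP/2.0 frames on the same connection; after a `200`, it would fall back to parsing an ordinary 1.1 response.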
Received on Wednesday, 22 August 2012 03:16:06 UTC