- From: Phillip Hallam-Baker <hallam@gmail.com>
- Date: Wed, 25 Jul 2012 14:30:07 -0400
- To: Albert Lunde <atlunde@panix.com>
- Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
On Wed, Jul 25, 2012 at 1:59 PM, Albert Lunde <atlunde@panix.com> wrote:

>> HTTP does have a similar conflation, but nowhere near as severe.
>> Content is mostly confined to the body and Routing is strictly
>> confined to the Head. The parts that cross the line are
>> Content-Encoding and Content-Type. Both of which are ignored in a Web
>> Services context almost all the time. Yes, a Web Service could
>> support multiple character encodings but I cannot see any case where I
>> would want the service to use Content-Encoding to make the choice.
>
> This seems like a reasonable position for web services, but HTTP is also a
> transport for HTML in the context of web browsers, which when mixed with
> JavaScript and dynamic HTML, have done a remarkable job of confusing content
> with metadata, and declarative markup with Turing-complete languages.

Hopefully TLS deals with those cases to the extent that it is possible to do so.

> There must be some security attacks which involve corrupting the headers or
> the request. Maybe HTTP/2.0 will have better framing to resist this.
>
> "HTTP Request Splitting" comes to mind, or maybe adding a
> Content-Disposition header.
>
> You may be right, though, that it's easier to apply some kinds of security
> (signing or encryption) to a payload.

I suspect it will fall out naturally from the MUX design, since pretty much
every MUX design ends up separating routing headers and content headers.
Take a look at MIME for example.

Worst case scenario would be some sort of encapsulation scheme:

GET / HTTP/1.1
Date: Blah..
Proxy-Woxy:
Content-Integrity: wkjfhawkjhqkwjerhg
Content-Length: 1029096
Content-Type: application/http

Content-Length: 1029048

<?xml 2.3; encoding=utf9>
<....

This does look a little like PEP of course.

-- 
Website: http://hallambaker.com/
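[Editorial note: the "HTTP Request Splitting" attack mentioned above can be sketched in a few lines. This is a minimal Python illustration, not anything from the thread; the serializer and the attacker string are hypothetical, and the point is only that blank-line framing in HTTP/1.x lets a CRLF smuggled into a header value split one request into two.]

```python
def build_request(path, user_agent):
    # Naive HTTP/1.1 serializer that does NOT sanitize header values
    # (hypothetical; real libraries reject CR/LF in values).
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: example.com\r\n"
            f"User-Agent: {user_agent}\r\n"
            f"\r\n").encode("ascii")

# An attacker-controlled value containing CRLF terminates the first
# request early and starts a second one -- classic request splitting.
evil = "Mozilla\r\n\r\nGET /admin HTTP/1.1\r\nHost: example.com\r\n"
wire = build_request("/", evil)

# A downstream parser that frames on blank lines now sees two requests.
requests = [r for r in wire.split(b"\r\n\r\n") if r.strip()]
print(len(requests))  # 2 -- the smuggled GET /admin rides along
```

Length-prefixed binary framing (as HTTP/2.0 later adopted) closes this hole, since a header value can no longer inject a frame boundary.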
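[Editorial note: the encapsulation scheme sketched in the email — routing headers on an outer envelope, content headers travelling inside an application/http body — can be illustrated with a toy parser. All names here are hypothetical; this is an assumption-laden sketch of the idea, not a real implementation.]

```python
def parse_headers(block: bytes) -> dict:
    # Collect "Name: value" lines; lines without a colon (the request
    # line) are skipped. Names are lower-cased for lookup.
    headers = {}
    for line in block.split(b"\r\n"):
        if b":" in line:
            name, _, value = line.partition(b":")
            headers[name.strip().lower()] = value.strip()
    return headers

def split_message(raw: bytes):
    # Frame on the first blank line: head section vs. body.
    head, _, body = raw.partition(b"\r\n\r\n")
    return parse_headers(head), body

outer = (b"GET / HTTP/1.1\r\n"
         b"Content-Type: application/http\r\n"
         b"Content-Length: 52\r\n"
         b"\r\n"
         b"Content-Type: text/plain\r\n"
         b"Content-Length: 5\r\n"
         b"\r\n"
         b"hello")

# An intermediary parses only the outer (routing) headers; the inner
# content headers stay opaque inside the payload, MIME-style.
routing, payload = split_message(outer)
content, body = split_message(payload)
print(routing[b"content-type"])   # b'application/http'
print(body)                       # b'hello'
```

The separation is exactly the MIME pattern the email points at: the transport touches the envelope, while integrity or encryption can be applied to the encapsulated message as a unit.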
Received on Wednesday, 25 July 2012 18:30:35 UTC