Re: Attack research on HTTP/2 implementations

Needs more meta, as XML is to XHTML: I think it's time for a generic language for HTTP-ish protocols.



I'm using XProc 3.0 to describe and configure Montage servers (my protocol, MONT, with Diameter support), and to define the functionality of methods in terms of file permissions and response status codes. What I need is a vocabulary in there to define my method semantics and their bindings to various transport protocols, specifically SCTP, without the result having to be HTTP/4, i.e. a new transport layer for every major revision of an application protocol. Sorry, that makes no sense to me.
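To make "functionality of methods in terms of file permissions and response status codes" concrete, here's a minimal sketch in Python. The function name and the permission-to-status mapping are my own illustration, not MONT's actual vocabulary:

```python
import os
import stat

# Illustrative sketch: derive the status a "dereference" method returns
# purely from filesystem state and permission bits, independent of any
# particular transport binding. The mapping here is hypothetical.

def status_for_dereference(path: str) -> int:
    if not os.path.exists(path):
        return 404          # no file, nothing to represent
    mode = os.stat(path).st_mode
    if not mode & stat.S_IRUSR:
        return 403          # present, but the owner can't read it
    return 200              # readable: serve a representation
```

The point of such a vocabulary would be that this mapping is stated once, and each transport binding (TCP, SCTP, whatever) just carries the resulting status.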



This vocabulary's spec would strip section 8.6 down to just an explanation of why it's good for clients and servers to know message lengths explicitly, explain the security risks of making Content-Length (CL) and Transfer-Encoding (TE) "just work" over TCP/UDP, then suggest that other transport-layer protocols might be better suited to HTTP (and isn't that just what /2 and /3 attempt?). It would define bindings to TCP, UDP, DCCP, SCTP, or whatnot. Which binding you choose depends on whether you want to muddle layer boundaries to "solve" the head-of-line (HoL) blocking problem, or choose a transport protocol that doesn't have that problem to start with.
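The CL/TE risk is easy to demonstrate: the same bytes frame differently depending on which header a parser honors. A toy sketch (illustrative parsing only, nothing like a real HTTP implementation):

```python
# Why CL and TE must not both "just work": a CL parser and a TE parser
# disagree about where this message's body ends, which is the root of
# desync and request-smuggling attacks between front- and back-ends.

raw = (b"POST / HTTP/1.1\r\n"
       b"Content-Length: 6\r\n"
       b"Transfer-Encoding: chunked\r\n"
       b"\r\n"
       b"0\r\n\r\nGET /admin HTTP/1.1\r\n\r\n")

head, body = raw.split(b"\r\n\r\n", 1)

def frame_by_cl(body: bytes, length: int) -> bytes:
    # CL parser: consume exactly `length` bytes of body.
    return body[:length]

def frame_by_te(body: bytes) -> bytes:
    # TE parser: a lone "0" chunk ends the body; anything after it
    # is treated as the start of the *next* request on the connection.
    end = body.index(b"0\r\n\r\n") + 5
    return body[:end]

print(frame_by_cl(body, 6))   # b'0\r\n\r\nG' -- six bytes consumed
print(frame_by_te(body))      # b'0\r\n\r\n'  -- empty chunked body
# The two sides now disagree on where the next request begins,
# so "GET /admin" gets smuggled past one of them.
```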



Instead of "All general-purpose servers MUST support the methods GET and HEAD" I'd say "MUST support methods for dereferencing URIs with or without content" or some such. MONT has no method matching the semantics of HTTP DELETE; ultimate removal of a resource is the job of a housekeeping bot or sysadmin, resetting the resource to 404 status. DElete, ReMove, INhibit, or UNlink could result in 410, 451, or 401-403. So instead of defining DELETE, I'd maybe define generic removal semantics as changing resource status from 200 to 4xx when subsequently dereferenced. Why should my origin server obliterate all representations of a resource on DELETE? There are plenty of use cases where I may want to REstore, RePlace, LiNk, or ENable the resource *without* making it a sysadmin restore-from-backup task.
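A hypothetical sketch of that generic removal semantics: each method changes the status a later dereference returns, rather than obliterating representations. The specific method-to-status mapping below is my illustration of the examples above, not a published MONT vocabulary:

```python
# Hypothetical removal semantics: "removal" methods flip the status a
# subsequent dereference returns from 200 to some 4xx, and a matching
# method flips it back, with no restore-from-backup involved.

REMOVAL_STATUS = {
    "DELETE":  410,   # Gone: removed on purpose, and say so
    "REMOVE":  451,   # Unavailable For Legal Reasons
    "INHIBIT": 403,   # Forbidden: representation withheld
    "UNLINK":  404,   # Not Found: link severed, content intact
}

resources = {"/widget": 200}   # URI -> status on dereference

def apply_method(uri: str, method: str) -> None:
    if method in REMOVAL_STATUS:
        resources[uri] = REMOVAL_STATUS[method]
    elif method in ("RESTORE", "ENABLE"):
        resources[uri] = 200   # undo removal without a sysadmin

def dereference(uri: str) -> int:
    return resources.get(uri, 404)

apply_method("/widget", "DELETE")
assert dereference("/widget") == 410
apply_method("/widget", "RESTORE")
assert dereference("/widget") == 200
```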



Compressed HTTPS connections being a security risk is one of those "wow, why didn't I see that coming 20 years ago" things. Probably because I didn't buy into the 7-layer OSI network model back then. But now I see how the layering violations in HTTP inevitably turn into exploits. Implementing various features of HTTP/1-3 at the transport layer, e.g. by using SCTP, should dramatically reduce the "attack surface" of the origin server, while eliminating many well-known attacks outright.



So, I think there's a way to define HTTP more generically (as opposed to reconciling nuances between 1-3), while blessing the use of SCTP and custom methods. In a way, I'm advocating security through obscurity over having to use HTTP/2 or /3 every step of the way from user-agent to origin server to avoid desync/downgrade/smuggling exploits. Don't get me wrong, I'm also full of praise for the state of HTTP these days, in that I think any competing protocol *anyone* can design will be very HTTP-ish. Just maybe without the layering violations. ;)



-Eric


---- On Fri, 06 Aug 2021 01:22:01 -0700 Willy Tarreau <w@1wt.eu> wrote ----


On Fri, Aug 06, 2021 at 12:04:50AM -0700, Eric J Bowman wrote: 
> To be expected (IMO) when major version numbers are tightly bound to a lower 
> layer of the stack (/1 = TCP, /2 = SPDY, /3 = QUIC) when what I'd like to see 
> is an HTTP spec (RFC) with generic language applicable to any underlying 
> stack, otherwise what, /4 = SCTP anyone? Every time this happens, new 
> assumptions are bound to be inherited, so to speak. 
 
You mean like this maybe ? 
 
 https://github.com/httpwg/http-core 
 https://github.com/httpwg/http2-spec 
 
Willy

Received on Monday, 9 August 2021 09:52:26 UTC