- From: Willy Tarreau <w@1wt.eu>
- Date: Mon, 11 Jun 2012 11:16:36 +0200
- To: Roberto Peon <grmocg@gmail.com>
- Cc: ietf-http-wg@w3.org
Hi Roberto! I was sure you would be the first to respond :-)

On Sun, Jun 10, 2012 at 04:39:37PM -0700, Roberto Peon wrote:
> > I'm already observing request compression ratios of 90-92% on various
> > requests, including on a site with a huge page with large cookies and
> > URIs; 132 kB of requests were reduced to 10 kB. In fact, while the
> > draft suggests the use of multiple header contexts (connection, common
> > and message), I'm now feeling like we don't need to store 3 contexts
> > anymore; a single one is enough if each request remains relative to
> > the previous one.
>
> For my deployment, I'm fairly certain this would not be all that common.
> Two contexts may be enough ('connection' and 'common'), but I think you
> had it right the first time.

The connection context indeed has some uses, but we found them to be
somewhat limited. Between a client and a server, the UA and connection
information may be transmitted; whether it is transmitted as a
connection-specific header or as a normal header that is retained for all
subsequent messages makes no difference. For a proxy, connection headers
may be used to transmit the Via and Forwarded-For headers. The latter
goes away once connections are multiplexed between multiple clients.

Concerning the merge of common+message into message, I found in the
traffic I analysed that a number of header fields are transmitted for
only a few requests in a row. Initially I thought that sending a set of
headers intended to be common to multiple consecutive requests was the
way to do it. But after seeing the traces, I'm realizing that sending
differences between consecutive requests achieves the same result with
more flexibility and better resistance to frequent changes. Also, one of
the difficulties for a proxy was deciding what to put into the common
section; by only sending differences between requests, that problem
disappears.
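The difference-based scheme above can be sketched as follows. This is a
minimal illustration of the idea (my own sketch, not the encoder used for
the measurements): each request only transmits the header fields that
changed relative to the previous request on the same connection, plus
deletion markers for fields that disappeared.

```python
def delta_encode(prev: dict, curr: dict) -> dict:
    """Return only the header fields that differ from the previous request.
    A value of None marks a field that must be removed from the context."""
    diff = {name: value for name, value in curr.items()
            if prev.get(name) != value}
    for name in prev:
        if name not in curr:
            diff[name] = None  # deletion marker
    return diff

def delta_decode(prev: dict, diff: dict) -> dict:
    """Rebuild the full header set from the previous one plus the delta."""
    curr = dict(prev)
    for name, value in diff.items():
        if value is None:
            curr.pop(name, None)
        else:
            curr[name] = value
    return curr
```

With this, two consecutive requests that share Host, User-Agent and
cookies only pay for the fields that actually changed (typically the URI
and sometimes Accept), which is where the 90%+ ratios come from.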
> The more clients you have and are aggregating through to elsewhere, the
> more advantageous that scheme becomes.

Warning: for me there has always been only one common section, since we
can't make a server support an unbounded number of contexts.

> > - Accept: text/css,*/*;q=0.1
> >   => this one changes depending on what object the browser requests,
> >      so it is less efficiently compressed:
> >
> >        1 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> >        4 Accept: text/css,*/*;q=0.1
> >        8 Accept: */*
> >        1 Accept: image/png,image/*;q=0.8,*/*;q=0.5
> >        2 Accept: */*
> >        9 Accept: image/png,image/*;q=0.8,*/*;q=0.5
> >        2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> >       90 Accept: image/png,image/*;q=0.8,*/*;q=0.5
> >        1 Accept: */*
> >        9 Accept: image/png,image/*;q=0.8,*/*;q=0.5
> >
> >   => with better request reordering, we could have this:
> >
> >       11 Accept: */*
> >      109 Accept: image/png,image/*;q=0.8,*/*;q=0.5
> >        4 Accept: text/css,*/*;q=0.1
> >        3 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
>
> Achieving this seems difficult? How would we get a reordering to occur
> in a reasonable manner?

I don't think it's that difficult, but I'm not a browser developer and
I'm sure they're facing a huge number of complex issues. For instance,
maybe it's not always possible to fetch all images at once, or to fetch
CSS first and then images. I must say I don't know :-/

> > I'm already wondering if we have *that* many content-types and if we
> > need to use long words such as "application" everywhere.
>
> We were quite wordy in the past :)

Yes, indeed.
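The point of the reordering can be made concrete with a toy computation
(my own illustration, not from the trace tooling): what costs bytes under
a difference-based coder is the number of times the Accept value *changes*
between consecutive requests, not the total number of requests.

```python
def value_changes(values):
    """Count how often a header value differs from the previous one."""
    return sum(1 for a, b in zip(values, values[1:]) if a != b)

# The trace above, collapsed to (count, value) runs with shortened labels:
runs = [(1, "html"), (4, "css"), (8, "any"), (1, "img"), (2, "any"),
        (9, "img"), (2, "html"), (90, "img"), (1, "any"), (9, "img")]
trace = [v for n, v in runs for _ in range(n)]

reordered = sorted(trace)  # group identical Accept values together

print(value_changes(trace))      # 9 value changes in the original order
print(value_changes(reordered))  # 3 value changes after grouping
```

Same 127 requests either way; reordering just turns nine deltas on this
header into three.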
> > - Cookie: xtvrn=$OaiJty$; xtan327981=c; xtant327981=c; has_js=c;
> >   __utma=KBjWnx24Q.7qFKqmB7v.i0JDH91L_R.0kU2W1uL49.JM4KtFLV0b.C;
> >   __utmc=Rae9ZgQHz;
> >   __utmz=NRSZOcCWV.d5MlK5RJsi.-.f.N8J73w=S1SLuT_j0m.O8|VsIxwE=(jHw58obb)|r9SgsT=WQfZe8jr|pFSZGH=/@/qwDyMw3I;
> >   __gads=td=ASP_D5ml4Ebevrej:R=pvxltafqZK:x=E4FUn3YiNldW3rhxzX6YlCptZp8zF-b5qc;
> >   _chartbeat2=oQvb8k_G9tduhauf.LqOukjnlaaE7K.uDBaR79E1WT4t.Kr9L_lIrOtruE8;
> >   __qca=LC9oiRpFSWShYlxUtD37GJ2k8AL; __utmb=vG8UMEjrz.Qf.At.pXD61lUeHZ;
> >   pm8196_1=c; pm8194_1=c
> >
> >   => amazingly, this one compresses extremely well with the above
> >      scheme, because additions are performed at the end so consecutive
> >      cookies keep a lot in common, and changes are not too frequent.
> >      However, given the omnipresent usage of cookies, I was wondering
> >      why we should not create a dedicated entity for cookies instead
> >      of abusing the Cookie header. It would make it a lot easier for
> >      both ends to find what they need. For instance, a load balancer
> >      just needs to find a server name in the blob above. What a waste
> >      of on-wire bits and of CPU cycles!
>
> You're suggesting breaking the above into smaller, addressable bits?

Yes, possibly. I'm not completely sure yet, because the overhead of "; ="
is small. That said, we're seeing many hex-encoded or base64-encoded
cookies everywhere, and such use cases would benefit from being
length-delimited and from supporting binary contents.

> > Has anyone any opinion on the subject above? Or ideas about other
> > things that terribly clobber the upstream pipe and that should be
> > fixed in 2.0?
>
> I like binary framing because it is significantly easier to get right
> and works well when we're considering things other than just plain HTTP.
>
> Token-based parsing is quite annoying in comparison -- it either
> requires significant implementation complexity to minimize memory.
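To make the "length-delimited, binary-capable" idea concrete, here is one
hypothetical encoding (entirely my own assumption, not a proposed wire
format): each cookie name and value is preceded by a 2-byte big-endian
length, so a recipient can skip pairs it doesn't care about without
scanning for "; " delimiters, and values may contain arbitrary bytes
without base64 or hex inflation.

```python
import struct

def encode_cookies(pairs):
    """Serialize (name, value) byte pairs as length-prefixed fields."""
    out = bytearray()
    for name, value in pairs:
        out += struct.pack(">H", len(name)) + name
        out += struct.pack(">H", len(value)) + value
    return bytes(out)

def find_cookie(blob, wanted):
    """Jump over length-prefixed pairs until the wanted name is found."""
    i = 0
    while i < len(blob):
        (nlen,) = struct.unpack_from(">H", blob, i); i += 2
        name = blob[i:i + nlen]; i += nlen
        (vlen,) = struct.unpack_from(">H", blob, i); i += 2
        value = blob[i:i + vlen]; i += vlen
        if name == wanted:
            return value
    return None
```

A load balancer looking for its server-affinity cookie then performs a
few length-guided jumps instead of parsing the entire blob above.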
And it forces us to support borderline variants (e.g. LF vs CRLF, case
matching, support for empty names and spaces around names, etc.). And it
requires the recipient to parse data it doesn't care about, just to find
delimiters.

> With length-based framing, the implementation complexity is arguably
> decreased for everyone, and certainly in cases where you wish to be
> efficient with buffers.

Exactly. And it's harder to get it wrong :-)

Thanks,
Willy
Received on Monday, 11 June 2012 09:17:07 UTC