From: Matthew Kerwin <matthew@kerwin.net.au>
Date: Thu, 18 Aug 2016 09:35:13 +1000
To: Joe Touch <touch@isi.edu>
Cc: Willy Tarreau <w@1wt.eu>, Mark Nottingham <mnot@mnot.net>, tcpm@ietf.org, HTTP Working Group <ietf-http-wg@w3.org>, Patrick McManus <pmcmanus@mozilla.com>, Daniel Stenberg <daniel@haxx.se>
Message-ID: <CACweHNC1qFH5DMnZRE87bAE5sk_P+1z1Fzm-9YEu=E2DULkaYQ@mail.gmail.com>
Hi folks, I'm stepping in here on just a couple of points. I'll snip the bits I can't or won't talk to.

On 18 August 2016 at 08:23, Joe Touch <touch@isi.edu> wrote:
>
> On 8/17/2016 2:13 PM, Willy Tarreau wrote:
> > On Wed, Aug 17, 2016 at 11:31:33AM -0700, Joe Touch wrote:
> >>> It can be cited in new RFCs
> >>> to justify certain choices.
> >> Hmm. Like the refs I gave could be cited in this doc to justify *its*
> >> choices? :-)
> > I think it would be nice that this is cited, but to be clear on one
> > point, I've never heard about your papers before you advertised them
> > here in this thread,
>
> A search engine on the terms "TCP HTTP interaction" would have popped
> them up rather quickly.
>
> > and yet I've been dealing with timewait issues
> > for 15 years like many people facing moderate to large web sites
> > nowadays.
>
> "timewait issues" and we're the 5th hit in Google.

Google, unless it's changed again recently, tailors search results for the user. My first page of hits for that query are all serverfault.com, superuser.com, serverframework.com, stackoverflow.com, etc. Guess where I get most of my advice.

> >>>> Yes, and discussing those issues would be useful - but not in this
> >>>> document either.
> >>> Why? Lots of admins don't understand why the time_wait timeout remains
> >>> at 240 seconds on Solaris, with people saying "if you want to be conservative
> >>> don't touch it, but if you want to be modern simply shrink it to 30 seconds
> >>> or so". People need to understand why advice has changed over 3 decades.
> >> The advice hasn't really changed - the advice was given in the 99 ref,
> >> which includes some cases where it can still be appropriate to decrease
> >> that timer.
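As an aside, the quick check those admins are actually searching for is a one-liner, not a course. A sketch, assuming a Linux box with iproute2's `ss` (fed canned sample output here, so the counting step itself is visible rather than depending on a live socket table):

```shell
# On a live server you'd pipe real output:
#   ss -tan state time-wait | wc -l
# Canned `ss -tan`-style output standing in for the real thing:
ss_output='State      Recv-Q Send-Q Local Address:Port  Peer Address:Port
ESTAB      0      0      10.0.0.1:80         10.0.0.2:51515
TIME-WAIT  0      0      10.0.0.1:80         10.0.0.2:51516
TIME-WAIT  0      0      10.0.0.1:80         10.0.0.2:51517'

# Count connections sitting in TIME-WAIT (prints 2 for this sample):
printf '%s\n' "$ss_output" | awk '$1 == "TIME-WAIT"' | wc -l
```

Whether a big number there means "shrink the timer" or "reuse the sockets" is exactly the tradeoff RFC 6191 discusses, and exactly what the blog posts tend to get wrong.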
> > Most people see it the other way around: they see no valid case to *increase*
> > it beyond a few seconds, because for them the default value should be extremely
> > low (i.e. this firewall vendor several years ago trying to insist on one second).
> > Yes, that's really sad, but that's reality. And you can tell them to read 6191,
> > they won't care.
>
> Most people's servers don't need to run fast enough to care (note that
> nearly everyone runs some sort of web server on nearly every device,
> whether for control or configuration). The only issue is high-volume
> servers (the kind sysadmins deal with), and those people tend to already
> know what the tradeoffs are and accept the risks.

Your sysadmins are not like my sysadmins. But these are generalisations and anecdotes.

> >>> - TCP timestamps: what they provide, what are the risks (some people in
> >>>   banking environments refuse to enable them so that they cannot be used
> >>>   as an oracle to help in timing attacks).
> >> That's already covered in the security considerations of RFC 7323. How
> >> is HTTP different, if at all, from any other app?
> > HTTP is special in that it is fairly common to have to deal with tens of
> > thousands of connections per second between one client and one server when
> > you are on the server side, because you place a number of gateways (also
> > called reverse-proxies) which combine all of the possible issues you can
> > think of at a single place.
>
> There are lots of services that have that many transactions - DNS
> servers (even local ones), remote databases, etc.
>
> The point is that HTTP doesn't make the problem different, so this isn't
> an HTTP issue. It's a high-rate server issue.
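For what it's worth, the knobs this part of the thread keeps circling are a small handful of Linux sysctls. An illustrative fragment only (names are Linux-specific, and defaults and semantics vary by kernel version, so treat this as a map of the territory, not a recommendation):

```ini
# Illustrative /etc/sysctl.conf fragment - the settings that show up in
# the copy-paste TIME-WAIT tuning advice discussed above. Verify against
# your kernel's documentation before touching any of them.

# Reuse TIME-WAIT sockets for new outgoing connections; depends on
# TCP timestamps (RFC 7323):
net.ipv4.tcp_tw_reuse = 1

# Frequently pasted alongside the above, though it actually governs
# FIN-WAIT-2, not the TIME-WAIT timer - a common blog-post confusion:
net.ipv4.tcp_fin_timeout = 30

# The timestamps option itself - the one some banking environments
# disable over the timing-oracle concern mentioned in the thread:
net.ipv4.tcp_timestamps = 1
```

(A fourth knob that often appears, net.ipv4.tcp_tw_recycle, is exactly the kind of copy-paste advice being complained about here: it breaks clients behind NAT.)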
What makes HTTP different is that I expect most high-rate applications would exist in a context where the people running the servers and applications have some amount of specialist experience and knowledge of high-rate throughput, or at least an expectation that they're doing something that requires such knowledge. HTTP is ubiquitous, and resides all along the traffic scale, from my website (~no bits per second) to google.com (all the bits); and the slide -- or sudden jump -- up that scale doesn't always correspond with acquiring expertise in TCP stack tuning.

> > So probably you're starting to see the benefit of having a single doc
> > to concentrate all this.
>
> The same reason it's useful to have this all in one place is the reason
> we already do - there are books and courses on this.

My users aren't getting my content right at the time my site is booming. I've just been slashdotted/hackernewsed/whatever. Do I enroll in a course? Buy a textbook and swot up (which I haven't done since I finished my IT degree 15+ years ago)? Or do I hit up serverfault.com and bash the keyboard until the fires go out? Summary information is really important. Having it published by the same people who published the protocol spec adds some serious cred, at least in my eyes.

> > You provided at least 3 different articles
> > to read and 2 or 3 different RFCs in addition to the original ones,
> > of course. A hosting provider whose web sites are down due to a lack
> > of tuning doesn't start to read many very long articles, and even less
> > the most scientific ones; they need to find quick responses that they
> > can apply immediately (a matter of minutes). So they launch google, they
> > type "web site dead, time-wait overflow" and they get plenty of
> > responses on stackoverflow and serverfault, many from people having
> > done the same in the past and repeating the same mistakes over and over.
>
> These people don't read RFCs to fix problems.
> They take online courses or read "how to" books - which do already
> exist in this space.

Which people? I tend to google the error and see if there's some sort of consensus on stackoverflow. And then, as often as not, I have to advise my sysadmins of a course of action because they know as much as me, or they don't want to deal with my situation, or some other reason.

> > A document validated by several people and giving links for further
> > reading can help improve this situation.
>
> Those are the books and courses I'm talking about already.
>
> > People rediscover wheels because it's hard to find simple and accurate
> > information on the net.
>
> Nobody looks to RFCs to solve that problem...
>
> > Basically you have the choice:
> > - either uneducated blog posts saying "how I saved my web site using 2
> >   sysctls"
> > - or academic papers which are only understandable by scientific people
> >   having enough time
>
> ... that's what net FAQs are for, as well as courses and books.
>
> > At least the first ones have the merit of being easy to test, and since
> > they appear to work they are viral.
>
> >>> All of them became issues for
> >>> many web server admins who just copy-paste random settings from various
> >>> blogs found on the net which just copy the same stupidities over and over,
> >>> resulting in the same trouble being caused to each of their readers.
> >> This doc is all over the place.
> >>
> >> If you want a doc to advise web admins, do so.
> > That's *exactly* what Daniel started to do when you told him he shouldn't
> > do it.
>
> I didn't say a doc to advise web admins wasn't useful. I said it wasn't
> an RFC.
>
> It's a web FAQ, a book, etc.

Here's the crux of the issue. What do you think an RFC is, that we (apparently) don't? Why is an informational RFC not allowed to present the same sort of information as an FAQ? (Isn't that what a BCP is?)
Personally I'd be happy if it was written up exactly like an FAQ, and published as an informational RFC; because that tells me that this FAQ was published by the IETF. It's not some random dude's unreliable blog full of cargo-cult advice and anecdotes; it met the consensus of the organisation that published the HTTP protocol spec. It's legitimate and reliable. And one day, when it's out of date, it'll be updated or obsoleted by the new consensus wisdom of the IETF.

That, as far as I know, doesn't defy the official definition of what an RFC is*, nor does it devalue any existing (or future) RFCs.

And Google indexes RFCs. If this ends up being a really useful document (which I imagine it would), with lots of inbound links and references, people won't need to "look to RFCs to solve [their] problem"; they'll do what they already do -- look to Google, and Google will point them to this RFC.

* heh

> > and I find it fantastic to see that this protocol
> > still scales so well. But we need to consider modern usages of this protocol
> > for the web, and not just academic research and e-mail.
>
> You might consider that TCPM and TSVWG don't exist for just "academic
> research and e-mail". What do you think we've been doing for the past 40
> years?

Dunno, I was only born 35-odd years ago. ;)

Shall we talk about generation gaps? Greybeards vs millennials? (Of which I'm neither, BTW. Not yet, at least.) Where we come from, how we find information to solve our problems, the way we view the IETF and its RFCs are all different. If this argument is just that this stuff doesn't belong in an RFC, that's a cultural issue, and not one I think we can resolve in this one technical working group.

Cheers
--
Matthew Kerwin
  http://matthew.kerwin.net.au/
Received on Wednesday, 17 August 2016 23:35:45 UTC