W3C home > Mailing lists > Public > ietf-http-wg@w3.org > July to September 2012

frequent DNS queries RE: comments on draft-mbelshe-httpbis-spdy-00

From: Dan Wing <dwing@cisco.com>
Date: Thu, 16 Aug 2012 22:21:14 -0700
To: 'William Chan' <willchan@chromium.org>, "'Patrick McManus'" <pmcmanus@mozilla.com>
Cc: "'Phillip Hallam-Baker'" <hallam@gmail.com>, <ietf-http-wg@w3.org>
Message-ID: <0b5f01cd7c38$1f6ef2b0$5e4cd810$@com>
> -----Original Message-----
> From: willchan@google.com [mailto:willchan@google.com] On Behalf Of
> William Chan
> Sent: Wednesday, August 15, 2012 9:50 PM
> To: Patrick McManus
> Cc: Phillip Hallam-Baker; ietf-http-wg@w3.org Group
> Subject: Re: comments on draft-mbelshe-httpbis-spdy-00
> 
> Can you clarify the reason to prefer SRV over something like an
> Alternate-Protocol response header? I can see that Alternate-Protocol
> is suboptimal in that it requires waiting for a response first, but I
> have concerns about adding yet another DNS lookup. Chromium has already
> lowered its concurrent getaddrinfo() calls to 6 because some home
> routers cannot handle too many concurrent DNS queries.

I've heard of that problem.  I am guessing most of those problems are
on IPv4-only hosts, where the underlying DNS code generates 6 DNS
queries for 6 concurrent getaddrinfo() calls.  When the host supports
IPv6, each lookup becomes an A plus a AAAA query, so you'll have to
drop the cap to 3 to stay at 6 queries in flight.
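The cap on concurrent lookups can be sketched with a semaphore.  Here is a
minimal, hypothetical version using Python's asyncio (the names
`resolve_all` and `max_concurrent` are illustrative; Chromium's actual
resolver is C++ and considerably more involved):

```python
import asyncio
import socket

async def resolve_all(hosts, max_concurrent=3):
    """Resolve hosts, never running more than `max_concurrent` lookups.

    On a dual-stack host each getaddrinfo() turns into an A plus a AAAA
    query, so a cap of 3 calls keeps roughly 6 DNS queries in flight."""
    sem = asyncio.Semaphore(max_concurrent)
    loop = asyncio.get_running_loop()

    async def resolve(host):
        async with sem:
            # AF_UNSPEC lets the resolver return IPv4 and IPv6 addresses.
            return await loop.getaddrinfo(host, 443,
                                          family=socket.AF_UNSPEC,
                                          type=socket.SOCK_STREAM)

    return await asyncio.gather(*(resolve(h) for h in hosts))

addrs = asyncio.run(resolve_all(["localhost", "localhost"]))
```

The semaphore only bounds concurrency; queued lookups still run
eventually, so a burst of page subresources is smoothed out rather than
dropped.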

Or, do we hope to identify the problem and encourage vendors to fix
it? 

In the past, ICE (RFC 5245) ran into similar problems, which seemed to
be caused by rapidly sending UDP packets from different source
addresses and ports, each of which forces a NAT to create a new
mapping.  Some NATs apparently can't create mappings that quickly.
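The rapid-mapping behavior can be illustrated locally, with a loopback
listener standing in for the remote peer.  This is a hypothetical sketch
(no NAT is actually involved): each fresh UDP socket binds a new
ephemeral source port, and behind a NAT each such packet would force the
NAT to allocate a new mapping, much as ICE connectivity checks do in a
burst.

```python
import socket

def burst_from_fresh_ports(dest, count):
    """Send one datagram each from `count` distinct source ports.

    Behind a NAT, every fresh source port would force a new UDP mapping;
    ICE-style connectivity checks generate such a burst rapidly."""
    socks = []
    for _ in range(count):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("127.0.0.1", 0))  # OS picks a fresh ephemeral port
        s.sendto(b"probe", dest)
        socks.append(s)
    ports = [s.getsockname()[1] for s in socks]
    for s in socks:
        s.close()
    return ports

# Local demo: a listener stands in for the remote peer.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))
listener.settimeout(2.0)
ports = burst_from_fresh_ports(listener.getsockname(), 5)
seen = sorted(listener.recvfrom(64)[1][1] for _ in range(5))
listener.close()
```

Five datagrams from five distinct source ports arrive at the listener;
a NAT in the path would have had to set up five mappings in quick
succession.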

With many (but not all) NATs configured to do local DNS proxying,
the problem may not be too severe.  But I understand cable operators
don't like DNS proxying in CPE, so we may have traded that away for
CPE that drops rapid-fire DNS queries instead.  That would not be good.

We tested some of our Linksys gear a week ago, after a discussion
at IETF, and couldn't reproduce the problem.  So I would sure like
to understand it better: where exactly things break, and which
configurations trigger the problem.

Then we can figure out if the best workaround is in the getaddrinfo()
code, in the application, or elsewhere.  But I would rather solve
the problem, especially for something as critical as DNS queries.

With DNSSEC, DANE, and IPv6 hopefully around the corner, we need
to fix problems with DNS queries and keep DNS working fast.

-d


> Adding more DNS lookups per hostname will further exacerbate the
> problem.
> 
> In any case, FWIW, I hope websites simply transition to https:// URIs
> instead :)
> 
> 
> On Wed, Aug 15, 2012 at 6:00 AM, Patrick McManus <pmcmanus@mozilla.com>
> wrote:
> 
> 
> 	On Tue, 2012-08-14 at 12:24 -0400, Phillip Hallam-Baker wrote:
> 
> 	> If we take architecture seriously, the primary signaling
> 	> mechanism for HTTP/2.0 should be some form of statement in a
> 	> DNS record to tell the client 'I do HTTP 2.0'. We might also
> 	> have some sort of upgrade mechanism for use when the DNS
> 	> records are blocked but that should be a fallback.
> 
> 
> 	This is my current thinking as well, though I'm not tied to it:
> 	SRV in the base case (with the possibility of DNSSEC) and
> 	something like upgrade/alternate-protocol over HTTP/1 as a
> 	slower fallback.
Received on Friday, 17 August 2012 05:21:42 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 17 August 2012 05:21:56 GMT