
Re: Browsers and .onion names

From: Jacob Appelbaum <jacob@appelbaum.net>
Date: Sat, 28 Nov 2015 11:56:56 +0000
Message-ID: <CAFggDF1nD1j1kn4YPH1kWBLPyVV28F4cNVuhqOJptmLSCg7GXg@mail.gmail.com>
To: Willy Tarreau <w@1wt.eu>
Cc: Mark Nottingham <mnot@mnot.net>, Cory Benfield <cory@lukasa.co.uk>, HTTP Working Group <ietf-http-wg@w3.org>
Dear Willy,

On 11/28/15, Willy Tarreau <w@1wt.eu> wrote:
> Hi Jacob,
> On Sat, Nov 28, 2015 at 09:53:06AM +0000, Jacob Appelbaum wrote:
>> > A lot of people
>> > use ".local" as the TLD for their local network. Someone might suddenly
>> > decide that ".local" must not be forwarded nor resolved for whatever
>> > reason
>> > and suddenly all compliant agents will break existing setups. You know
>> > better
>> > than any of us that a cleanly designed protocol doesn't require
>> > existing
>> > implementations to change to serve its purpose.
>> Uh, I'm not sure if you're telling a joke or not but this entire
>> process started because of .local as a Special-Use-Domain-Name:
>> https://www.iana.org/assignments/special-use-domain-names/special-use-domain-names.xhtml
> I didn't even know it was reserved. Once browsers start to block it, I know
> quite a number of people who will report breakage such as inability to
> access
> internal resources in their companies. Locally-administered TLDs are a
> missing
> feature to complement RFC1918 but that's out of the scope of this
> discussion.

I think that .local is reserved by RFC6762, which is about two and a
half years old. Most Apple computers have been using .local for
mDNS-related purposes for much longer, which is how .local came to be
reserved by Apple in the first place. Most GNU/Linux systems with
systemd also use .local for mDNS.

Will it break stuff? No, probably not: .local is handled such that,
absent any mDNS infrastructure, it Just Works (TM). Will it conflict
with other systems? Probably sometimes.

Does that mean .local is going away and that we need to ignore how it
is being used? I don't think so....

Does it mean that local admins who picked .local are probably sad
about that choice? Yes, I suspect you are correct; they probably are.

What shall we do here? Abandon all Special-Use-Domain-Names? What
about the other dozen? And why should we do that?

>> Thus the Pandora's box opened up without notice, I guess? Perhaps it
>> is time to implement them both?
>> In any case, yes, we have Special-Use-Domain-Names and there is a list
>> that some applications need to handle in a special manner. The IETF
>> seems to be blocking all other new Special-Use-Domain-Names, so the
>> flood you've expressed concern about is unlikely to happen.
> Fine, at least the mess will be limited.

That isn't certain, just temporary. There is work underway to revise
the process for Special-Use-Domain-Names. I personally hope that .gnu
and others will become Special-Use-Domain-Names as well.

>> >> If they accidentally make .onion queries without configuring to use
>> >> Tor, they'll be unpleasantly surprised (and the consequences could be
>> >> much
>> >> worst, depending on their situation).
>> >
>> > So that basically means that Tor is unsafe without this ? Thus maybe
>> > using
>> > this DNS mechanism was a poor choice to start with, and it's a bit late
>> > to
>> > change all DNS agents just to fix the protocol's design issues.
>> >
>> No, Tor is safe and complies with RFC7686. Other browsers and software
>> that leak .onion names are now understood to be unsafe.
> So if they're safe, why should they implement this ?

Tor is safe. What isn't safe is when users attempt to resolve host
names that they are incapable of resolving. We should not pollute the
root servers with such queries, and we should fail closed to protect
users. The mere query for the label may be problematic, as we
explained in our RFC...

We explained this in RFC7686 - if you read it and anything is unclear,
I'm happy to clarify further; I'd rather not repeat the RFC in full.

>> Just as time
>> moved on, many browsers don't implement HTTP/2, and some browsers
>> still use SSLv2/SSLv3.
>> There are lots of changes happening in browsers - this is no different
>> - it is a security and privacy concern. It has been identified as a
>> concern that we can resolve by following RFC7686, no pun intended.
>> Browsers SHOULD implement it but as Mark has said: we have no RFC
>> police.
> But you understand the trouble and precedent it's setting up ? "yes I
> know you're not interested in this protocol but despite this you should
> implement its RFC". I could as well suggest that for the sake of any
> protocol of mine, browsers take care to send even number of bytes in
> any request and that proxies should block requests containing an odd
> number of bytes.

I hear and understand your concerns.

Could you offer a counter-proposal for how to keep users safe, as we
have done in RFC7686?

I'd be happy to support such an RFC and in two years, we'll probably
have a similar discussion about your proposal from different
stakeholders. :-(

> I think it would be easier to suggest browsers to support a blacklist
> of TLDs that should not be resolved nor passed to proxies and then let
> users decide what TLDs they want to block. Those who use .onion addresses
> probably know it.

And when a browser pre-fetches a list of URLs, does the user know they
were a .onion user? No, of course not. There are dozens of similar
situations. We solve a real problem with RFC7686, and browsers, as
well as other software, have a duty of care to implement the solution.
It is a free choice, of course. There are no RFC police, only software
that is compliant with relevant RFCs or not.
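The pre-fetch case can be sketched quickly: before speculatively resolving anything, filter out URLs whose host falls under a blocked TLD. (A hypothetical Python sketch; the function name, the list of URLs, and the `foo.onion` host are all made up for illustration.)

```python
from urllib.parse import urlsplit

# TLDs that must never be resolved or prefetched, per RFC 7686.
BLOCKED_TLDS = {"onion"}

def safe_to_prefetch(url):
    """Return False for URLs whose host ends in a blocked special-use TLD."""
    host = urlsplit(url).hostname or ""
    tld = host.rstrip(".").rsplit(".", 1)[-1].lower()
    return tld not in BLOCKED_TLDS

# A page full of rel=prefetch links must not make the browser leak queries.
urls = ["https://example.com/a", "http://foo.onion/b"]
prefetchable = [u for u in urls if safe_to_prefetch(u)]
```

Filtering at this layer means the user never has to know they were "a .onion user" for the protection to apply.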

Hope that helps,
Received on Saturday, 28 November 2015 11:57:26 UTC
