W3C home > Mailing lists > Public > ietf-http-wg@w3.org > July to September 2012

Re: HTTP 1.1 --> 2.0 Upgrade

From: Yoav Nir <ynir@checkpoint.com>
Date: Mon, 20 Aug 2012 17:46:31 +0300
To: Phillip Hallam-Baker <hallam@gmail.com>
CC: Willy Tarreau <w@1wt.eu>, Julian Reschke <julian.reschke@gmx.de>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Message-ID: <08FA2695-855A-4B14-BFE5-D6A245D85832@checkpoint.com>

On Aug 20, 2012, at 4:18 PM, Phillip Hallam-Baker wrote:

> I really don't see content filtering proxies as a problem
> 
> The reason being that anyone doing content filtering has to constantly
> keep ahead of the game and track the current status of the Web.

It's somewhere between "has to" and "should". 

> Facebook and FarmVille are only the fads of the season, there will be
> another and another. Remember MySpace, TheWorld.com, Yahoo, GeoCities?
> FB may survive but it certainly won't be the last big thing on the
> Net.

There are more fine-grained stages here. At first it was some game that PFYs were enjoying; nothing to be concerned about. Later it was seen as a time-waster where people would "like" and comment on their friends' statuses. It would be trivial, even with a previous-generation firewall (or a router with an ACL), to block the FB addresses. The later stage is when administrators want not just a "block" or "allow" option, but more fine-grained control: allow people to comment on friends' statuses, but not play a bunch of time-waster games.

That is when the firewall vendors come out with the special content filtering features. That's months or years after the thing exists on the Net.

Then there's the issue of when administrators deploy it. Firewalls are generally not updated the way Chrome is; the customers are not running the latest version. They might very well be running a two- or three-year-old version.

So by the time the intermediary is deployed, even in well-managed networks, the thing may have been on the Net for several years.

If Chrome and the Google websites deploy quickly, they're bound to bump into intermediaries that are not ahead of the game. That is why NPN works so well. Any intermediary that does TLS proxying but does not know about SPDY will drop the NPN extension, disabling SPDY. If SPDY were signaled through a DNS record instead, those connections would be dropped as invalid HTTP.
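The graceful-fallback behavior described above can be sketched as a simple selection function. This is an illustrative sketch, not any vendor's actual implementation; the protocol tokens ("spdy/3", "http/1.1") are the identifiers used in NPN deployments of the time, and an empty list models an intermediary that stripped the extension entirely.

```python
# Sketch of NPN-style fallback: if a TLS-proxying intermediary drops
# the NPN extension, the peer advertises no protocols, and the
# connection quietly proceeds as HTTP/1.1 instead of failing outright.

def select_protocol(advertised):
    """Pick a next protocol from the list the TLS peer advertised.

    `advertised` is the (possibly empty) list of protocol tokens seen
    in the NPN extension. An empty list models an intermediary that
    removed the extension.
    """
    preferred = ["spdy/3", "http/1.1"]  # our order of preference
    for proto in preferred:
        if proto in advertised:
            return proto
    # Extension absent, or no overlap: fall back to plain HTTP/1.1.
    return "http/1.1"
```

The point is that the absence of the extension degrades the connection rather than breaking it, which is exactly what a DNS-based signal could not guarantee.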

> So are these firewalls products or services? An actively maintained
> service is really not a problem at all, they will keep up to date. It's
> fire-and-forget products that cause grief.

Both kinds of products exist, as well as combinations. Firewalls are sometimes kept in service past their support dates, and are sometimes never updated at all. Just as you can't rely on the browser not being IE6, you can't rely on the intermediary being up to date.

> Restrictive home routers on the other hand could be here for decades.
> [Well they could if they had started making them out of
> non-substandard parts. Mine used to last about 7 months until I switched
> to Apple. I have a box full of Lucent, 3Com, Linksys, Netgear and
> Belkin routers that all turned out to be crap.]

My Netgear is 7 years old, and would still be going strong, except my cable modem died last month, and the cable company replaced it with one that includes a wireless router. Luckily it never tried to filter anything.

Yoav

> On Mon, Aug 20, 2012 at 6:27 AM, Yoav Nir <ynir@checkpoint.com> wrote:
>> 
>> On Aug 20, 2012, at 12:35 PM, Phillip Hallam-Baker wrote:
>> 
>>> I think we need a lot more explanation of what this intermediary chain
>>> is and which side of the wall it lies.
>>> 
>>> Configuration of server side proxies such as squid has zero impact on
>>> this issue. Server installations that support HTTP/2.0 will be
>>> appropriately configured.
>> 
>> Agree.
>> 
>>> Are client side proxies really more than a corner case issue today?
>> 
>> Yes, they're everywhere - hotels, airports, workplaces, coffee shops.
>> 
>>> When we first developed them the whole of CERN was sitting on a T1 and
>>> caching was a critical concern. Only a small percentage of the HTML on
>>> the Web is even cachable.
>> 
>> At CERN you didn't have lawyers telling you that you had to get people to agree to terms and conditions. You probably didn't redirect them to a secure site where they could enter credit card numbers to get an hour of access either.
>> 
>>> Content Delivery Networks have caching but they work in a very
>>> different way to HTTP caching.
>>> 
>>> 
>>> I am pretty skeptical as to the value of firewall proxies when most
>>> let port 443 through.
>> 
>> Value is subjective, but TLS proxies have been available for six years, and are now part of most of the "next generation firewall" products. They inspect deeply enough that they can enforce a policy of "accept facebook, but not farmville". And they're getting installed in corporate networks all over the place. Obviously they need to understand the HTTP protocol, so we have to be careful about deploying something that they will block.
>> 
>> As long as there's a well-defined algorithm to figure out whether a particular stream is 1.1 or 2.0, those intermediaries can "grow" to deal with 2.0. But we also need the client and server to be able to work together when an old intermediary is present that drops what it considers to be malformed HTTP.
>> 
>> Yoav
> 
> 
> 
> -- 
> Website: http://hallambaker.com/
> 
> Scanned by Check Point Total Security Gateway.
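The "well-defined algorithm" for distinguishing a 1.1 stream from a 2.0 stream, mentioned in the quoted exchange, could be sketched as a check on the first bytes of the connection. This is purely illustrative: the actual discriminator was still an open question at the time of this thread, and the fixed connection preface used below is the one the eventual HTTP/2 specification adopted, not anything agreed in this discussion.

```python
# Illustrative classifier for an incoming byte stream, assuming a
# fixed, deliberately invalid-as-HTTP/1.1 connection preface (the
# "PRI * HTTP/2.0" sequence later standardized for HTTP/2).

HTTP2_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def classify_stream(first_bytes):
    """Return 'http/2' or 'http/1.x' from the first bytes received."""
    if first_bytes.startswith(HTTP2_PREFACE):
        return "http/2"
    # Anything else is treated as an HTTP/1.x request line,
    # e.g. b"GET / HTTP/1.1\r\n". An old intermediary that applies
    # this logic would reject the unfamiliar preface as malformed,
    # which is the failure mode the client and server must handle.
    return "http/1.x"
```

An intermediary implementing such a check can "grow" to pass 2.0 traffic by learning one new prefix, which is what makes a well-defined discriminator valuable.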
Received on Monday, 20 August 2012 14:47:15 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 20 August 2012 14:47:23 GMT