
Re: HTTP 1.1 --> 2.0 Upgrade

From: Phillip Hallam-Baker <hallam@gmail.com>
Date: Mon, 20 Aug 2012 09:18:51 -0400
Message-ID: <CAMm+LwjHZXGno+tNUcU0v-WpiU_MBoqAM7cDxb7eyYJev44ofw@mail.gmail.com>
To: Yoav Nir <ynir@checkpoint.com>
Cc: Willy Tarreau <w@1wt.eu>, Julian Reschke <julian.reschke@gmx.de>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>

I really don't see content-filtering proxies as a problem.

The reason being that anyone doing content filtering has to constantly
keep ahead of the game and track the current status of the Web.
Facebook and Farmville are only the fads of the season; there will be
another and another. Remember MySpace, TheWorld.com, Yahoo, GeoCities?
FB may survive, but it certainly won't be the last big thing on the
Net.

So are these firewalls products or services? An actively maintained
service is really not a problem at all; they will keep up to date. It's
the fire-and-forget products that cause grief.


Restrictive home routers, on the other hand, could be here for decades.
[Well, they could be if they started making them out of non-substandard
parts. Mine used to last about 7 months until I switched to Apple. I
have a box full of Lucent, 3Com, Linksys, Netgear and Belkin routers
that all turned out to be crap.]



On Mon, Aug 20, 2012 at 6:27 AM, Yoav Nir <ynir@checkpoint.com> wrote:
>
> On Aug 20, 2012, at 12:35 PM, Phillip Hallam-Baker wrote:
>
>> I think we need a lot more explanation of what this intermediary chain
>> is and which side of the wall it lies.
>>
>> Configuration of server side proxies such as squid has zero impact on
>> this issue. Server installations that support HTTP/2.0 will be
>> appropriately configured.
>
> Agree.
>
>> Are client side proxies really more than a corner case issue today?
>
> Yes, they're everywhere - hotels, airports, workplaces, coffee shops.
>
>> When we first developed them the whole of CERN was sitting on a T1 and
>> caching was a critical concern. Only a small percentage of the HTML on
>> the Web is even cacheable.
>
> At CERN you didn't have lawyers telling you you had to get people to agree to terms and conditions. You probably didn't redirect them to a secure site where they could enter credit card numbers to get an hour of access either.
>
>> Content Delivery Networks have caching but they work in a very
>> different way to HTTP caching.
>>
>>
>> I am pretty skeptical as to the value of firewall proxies when most
>> let port 443 through.
>
> Value is subjective, but TLS proxies have been available for six years, and are now part of most of the "next generation firewall" products. They inspect deeply enough that they can enforce a policy of "accept facebook, but not farmville". And they're getting installed in corporate networks all over the place. Obviously they need to understand the HTTP protocol, so we have to be careful of deploying something that they will block.
>
> As long as there's a well-defined algorithm to figure out whether a particular stream is 1.1 or 2.0, those intermediaries can "grow" to deal with 2.0. But we also need the client and server to be able to work together when an old intermediary is present that drops what it considers to be malformed HTTP.
>
> Yoav
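
[The detection Yoav asks for is roughly what HTTP/2 eventually specified: every
HTTP/2 connection must begin with a fixed 24-byte client preface, deliberately
chosen so that an HTTP/1.x parser rejects it as malformed. A minimal sketch of
how an intermediary could classify a stream from its first bytes; the
`classify` helper is hypothetical, not from any spec:

    # Distinguish an HTTP/2 stream from HTTP/1.x by its opening bytes.
    # HTTP/2 (as later standardized in RFC 7540) opens every connection
    # with this fixed preface; HTTP/1.x opens with a request line ending
    # in the protocol version.
    HTTP2_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

    def classify(initial_bytes: bytes) -> str:
        """Classify a connection from its first bytes (hypothetical helper)."""
        if initial_bytes.startswith(HTTP2_PREFACE):
            return "http/2"
        # An HTTP/1.x request line looks like b"GET /path HTTP/1.1\r\n".
        first_line = initial_bytes.split(b"\r\n", 1)[0]
        if first_line.endswith((b"HTTP/1.1", b"HTTP/1.0")):
            return "http/1.x"
        return "unknown"

An old intermediary without the first branch is exactly the failure mode
discussed above: it parses the preface as a broken HTTP/1.x request and drops
the connection. -- ed.]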



-- 
Website: http://hallambaker.com/
Received on Monday, 20 August 2012 13:19:23 GMT
