draft-cohen-http-ext-postal-00.txt

draft-cohen-http-ext-postal-00.txt makes a number of points.  I'll
discuss three of them.

Point 1: `New protocols should be firewall-friendly, i.e. allow for
easy blocking/filtering'

I agree with point 1.

Point 2: `In order to be firewall-friendly, a new protocol should not
use the POST method plus a new MIME type, but a completely new method'

I disagree with point 2.  The draft argues that using a new method,
instead of a new MIME type, allows the firewall to be more efficient,
because it then has to inspect a smaller part of each message passing
through it.

However, in the current internet environment, any decent firewall must
also inspect the MIME type if it is to be effective at enforcing
security policies, because a lot of insecure things, like GET
responses which carry native-code applets, are only detectable by
inspecting the MIME type.  The draft acknowledges this in section 8
but argues that this trend should be reversed.  However, I argue that
this trend is impossible to reverse.  Some software developers
independent of the IETF will continue to use POST (and GET) in an
effort to get the highest possible market penetration for their new
insecure and/or bandwidth-hogging multimedia formats and applications.
In fact, I expect that future firewalls will become increasingly
sophisticated in order to filter out the stuff produced by these
software developers: there is an arms race going on here.

So, because of all the non-IETF protocol and data format developers
out there, `don't use POST' will not allow firewalls to be any
simpler, and any efficiency gains will be limited to the case in
which messages are rejected.

In short, I would consider any new protocol to be sufficiently
firewall-friendly if it uses either a new method or a new MIME type.
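
To make this concrete, here is a minimal sketch (in Python, purely
illustrative; the NEWPROTO method, the application/new-proto type and
the block lists are made-up examples, not taken from the draft) of the
policy check a firewall ends up making either way:

  # Toy model of the decision a firewall makes per message.  All names
  # below are hypothetical examples.
  BLOCKED_METHODS = {"NEWPROTO"}                 # block-by-new-method case
  BLOCKED_TYPES = {"application/new-proto",      # block-by-new-MIME-type case
                   "application/x-native-code"}  # e.g. native-code applets

  def allow(method, content_type):
      """Return True if the message may pass the firewall."""
      if method in BLOCKED_METHODS:
          return False
      # Plain GET/POST traffic already forces the firewall to look at
      # the MIME type, so checking one more type adds essentially no work.
      if content_type in BLOCKED_TYPES:
          return False
      return True

  # Either signalling mechanism gives the firewall an equally clear handle:
  assert not allow("NEWPROTO", "text/plain")
  assert not allow("POST", "application/new-proto")
  assert allow("GET", "text/html")

Either way the administrator gets a one-line rule to write; the only
remaining advantage of a new method is that it arrives a few bytes
earlier in the request than the Content-Type header does, which is the
rejection-only gain mentioned above.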

Point 3: `Any new (IETF) protocol should have the property that a
firewall will block it by default, and that explicit action is needed
by the firewall administrator to enable its use'.

[This point is stated most clearly in section 2:
  `While the designers of a new
   protocol may feel that their new protocol introduces no new risks,
   they do not have the right to decide what a PFB will support.'
]

I could not disagree more.  I feel that there are many cases in which
it would be quite legitimate for the IETF to decide that, for a
certain protocol, the default mode should be that `the average
liberally configured firewall' accepts the protocol.

Also, one should realise that the IETF does not operate in a vacuum.
If it makes deployment of its open standards much more difficult by
always requiring explicit action by all firewall administrators, then
proprietary de facto standards, which were developed to work through
firewalls by default, will take over in many cases.

I feel that the IETF would shoot itself in the foot if it were to
adopt point 3 above as an absolute principle.  Adopting point 3 would
result in the IETF being unable to participate in, compete with, or
pre-empt some types of internet protocol efforts by independent
parties.  Also, it would make it much more difficult to upgrade some of
the current `streaming protocols', which are built on top of
application/something GET responses, to real streaming protocols which
are more internet-friendly.


Now for some specific nitpicks:

Section 5: 

"Unfortunately, when the letters P.O.S.T. came into existence as an
HTTP method, the operational meaning was fairly specific in that it
was for HTTP form data submission."  This is not historically accurate
as far as I know.  In a 5 Nov 1993 Tim BL draft of the HTTP spec, POST
is defined as `Creates a new object linked to the specified
object. [...]  The new document is the data part of the
request. [...]'.  In fact, I believe that forms were first defined a
few years after POST.

Apart from that, I find the `preserve the purity of the original
design' arguments in sections 5 and 6 not very compelling anyway.

Section 7:

Possible editing error: I believe that the `not' in the last line on
page 6 should not have been there.

Section 9:

Are you talking about Apache as an origin server or as a proxy?


Some final comments:

This draft touches on the highly political issue of whether a software
developer has the `right' to try to bypass the security/bandwidth
allocation policies set by firewall administrators.  The draft answers
this question with a resounding `no', and I tend to agree.  However,
we have to realise that not all people will agree with this: there are
business models which depend on the current de facto `yes' staying a
`yes'.  Seen in this light, the whole issue bears a striking
resemblance to the issue of whether advertising servers have the
`right' to set persistent cookies in order to gather cross-site
statistics.  So I can predict with some confidence that, if this draft
goes forward, we will at some point have press releases in which
people ask the IETF not to endorse this draft because it would kill
the profitability of all kinds of useful content sites.

Also, though I agree with the `no', I do not want to take the extreme
position that all new protocols, even those which have no big new
security/bandwidth impact, should be created to be automatically
rejected by all firewalls.

As for the specific case of IPP: I reviewed the IPP security
considerations and I think that it is perfectly legitimate for IPP to
use POST.
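
As far as I can tell from the IPP drafts, each print operation is
carried as an ordinary POST whose body is labelled with the
application/ipp MIME type, so a firewall that already inspects
Content-Type can single IPP out with one extra rule.  A rough sketch
in Python (the host, path and body below are made-up placeholders, not
taken from the IPP drafts) of such a request:

  # Rough sketch of what an IPP request looks like on the wire: an
  # ordinary POST whose Content-Type identifies the protocol.  The
  # host, path and body are placeholders for illustration only.
  import http.client

  ipp_body = b"..."  # an encoded application/ipp operation would go here
  conn = http.client.HTTPConnection("printer.example.com", 631)
  conn.request("POST", "/ipp/print", body=ipp_body,
               headers={"Content-Type": "application/ipp"})
  response = conn.getresponse()

The Content-Type line is all a firewall needs to see in order to apply
whatever policy the administrator has chosen for printing traffic.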

Koen.
