RE: Looking for comments on the HTTP Extension draft

At 13:10 12/28/98 -0800, Yaron Goland wrote:
><Defining the Problem>
>I suspect we all at least agree that there is a need for a mandatory
>extension mechanism. The functionality for this header being something on
>the order of "If you don't understand the header names specified in this
>header then you MUST fail this method."
>
>One could make the argument that if one needs to add a header with semantics
>that can't be ignored then one should change the method name and require
>that the new method not ignore this header. However the management
>complexities of this solution are exponential, thus this proposal is not
>acceptable.

Actually, I think this is just as valid for XML namespaces, although I don't
think that discussion belongs here.

<... deleted otherwise nice sales pitch for why HTTP extensions are needed
...>

><Henrik, put down that knife!>
>However, given the previous pointed out sub-optimalities, I'm not sure that
>the cure is all that much better than the disease. As such, I would like to
>propose an alternative design.
>
>1) The creation of a hierarchical namespace encoding for methods and headers
>ala what is being discussed in the URLREG WG. For example V!MS!FOO where V
>is a stand in for vnd or vendor.
>
>2) The creation of a standardized encoding mechanism to allow for the use of
>fully qualified URIs as a method or header name. Because both use the
>token production, standard URI characters such as : and / can not be used
>without encoding.
>
>These two mechanisms will allow for decentralized extension without fear of
>collision, exactly as is being attempted now in the URLREG WG. The cost,
>however, of this mechanism is byte bloat. Mandatory's use of prefixing
>allows short names to be used with short prefixes which are then translated
>to their full names. In essence, mandatory has created relative URIs.
>However the cost is double processing the headers. Thus every mandatory
>aware proxy/firewall/server must process each request twice. There is a
>trade off to be made here. My proposal leverages the existing HTTP
>infrastructure at the cost of byte bloat. Henrik's proposal solves the byte
>bloat problem but at the cost of causing us to have to completely re-write
>our parsers to do double parsing. I suspect maintaining the current
>infrastructure is probably the better goal.

I have two religious objections and one practical objection to this proposal
(I will try to be polite, as only a Dane can be). First, the practical one:

I don't understand the underlying logic behind this proposal. It seems to be
an attempt to work around the problem that already deployed servers throw
away header fields as soon as they are parsed, if they are unknown or at
least not needed by the server in order to generate the response.

For firewalls and proxies, this cannot be the case. By definition, all
unknown header fields are end-to-end header fields - that is, they must be
passed through to the other side regardless of whether the proxy is acting
as a client or a server. However, this changes if the header field is
mentioned in the Connection header field, in which case the proxy has to
find the header field again and zap it. That is, it must maintain at least a
reference to the header field while parsing the complete message.
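
To make that bookkeeping concrete, here is a rough sketch (in Python; the
function and constant names are mine, not from any spec or draft) of what a
forwarding proxy has to do: unknown fields go through as end-to-end, unless
the Connection header field names them, in which case they must be zapped -
and that decision can only be taken once the whole header section has been
read.

  # The usual hop-by-hop fields, plus whatever the Connection field names.
  HOP_BY_HOP = {"connection", "keep-alive", "proxy-authenticate",
                "proxy-authorization", "te", "transfer-encoding", "upgrade"}

  def forwardable_fields(fields):
      """fields: list of (name, value) tuples in the order they were parsed."""
      # The first pass has to finish before we know which fields to zap,
      # so the proxy keeps at least a reference to every field until then.
      named_in_connection = set()
      for name, value in fields:
          if name.lower() == "connection":
              named_in_connection.update(
                  token.strip().lower() for token in value.split(","))
      drop = HOP_BY_HOP | named_in_connection
      return [(name, value) for name, value in fields
              if name.lower() not in drop]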

In the case of an origin server, there are already at least two scenarios in
HTTP/1.1 where the server must parse the complete request before being able
to determine whether a header field can be ignored or not. Granted, both of
these cases are optional features that the server can ignore, but more about
this later - first the two cases:

a) If-Range only works if a Range header field is also present and must be
ignored if this is not the case. That is, If-Range cannot be thrown away
before the full message has been parsed.

b) The interaction between If-Modified-Since and If-None-Match requires both
to be used before the server can determine whether it can generate a 304
response or not. This has been discussed on the HTTP mailing list [1] - that
I think the solution is rather complex is another question.

That is, a moderately advanced HTTP/1.1 server is likely to have at least
the hooks for returning to and handling already parsed header fields while
the request is still being parsed.
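
As a rough sketch of what such hooks amount to (Python; the function and
parameter names are my own invention, not from any draft), the two cases
above boil down to decisions that can only be taken once the complete header
section has been parsed:

  from email.utils import parsedate_to_datetime

  def range_applies(headers):
      # (a) If-Range is only meaningful when a Range field is also present,
      # so it can neither be acted upon nor dropped until the whole header
      # section has been read.
      return "if-range" in headers and "range" in headers

  def can_send_304(headers, current_etag, last_modified):
      """(b) headers: dict of already-parsed fields, names lowercased;
      last_modified: timezone-aware datetime for the resource."""
      if_none_match = headers.get("if-none-match")
      if_modified_since = headers.get("if-modified-since")

      etag_matches = None
      if if_none_match is not None:
          # Naive split; real entity tags are quoted strings.
          tags = [t.strip() for t in if_none_match.split(",")]
          etag_matches = "*" in tags or current_etag in tags

      not_modified = None
      if if_modified_since is not None:
          not_modified = last_modified <= parsedate_to_datetime(if_modified_since)

      # Only once both fields have been seen can they be combined; under one
      # reading of the spec a matching entity tag can still be overridden by
      # a date that says the resource has changed since.
      if etag_matches is not None and not_modified is not None:
          return etag_matches and not_modified
      if etag_matches is not None:
          return etag_matches
      return bool(not_modified)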

Therefore a more correct assessment is probably that "some origin servers
will have to have extra hooks put into their header parsers." Note here that
it is not required to maintain pointers to anything but header fields
starting with a digit.
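
A minimal sketch of that hook (Python; the names are mine, and I am assuming
the draft's convention that prefixed extension fields begin with a numeric
namespace prefix):

  def remember_candidates(fields):
      """fields: list of (name, value) tuples in parse order."""
      # Only fields whose names start with a digit can turn out to be
      # prefixed extension fields, so only those need to be remembered
      # for a second look once the extension declarations have been seen.
      return [(i, name, value)
              for i, (name, value) in enumerate(fields)
              if name[:1].isdigit()]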

Is this a big thing? As always, an absolute answer to this question is
impossible, but relative to the additional work of encoding/decoding URIs
with a (new?) encoding, it doesn't seem so. Actually, it seems rather
lightweight compared to having to parse all header fields to see whether
they look like URIs in disguise.

On the religious side, here are my main points against your proposal:

a) Assuming that a central registry can be the authoritative source of
information for an open-ended set of extensions deployed on the Internet at
large is in my mind rather unrealistic. Sure, it may contain unauthoritative
information about vendor-specific features, but only the vendor has the
final information. HTTP Extension URIs point directly at this already, hence
avoiding multiple lookups.

b) The breakdown of independence between URIs and MIME headers. The
encoding of URIs must by definition be a function of the way we transport
them - in this case the MIME framework. This means that we forever tie
extension identifiers and MIME together. I don't like that feeling.

>The downside of my simplification proposal is that it doesn't provide a
>generic mechanism to say "The functionality represented by this URI is now
>associated with this method." Instead you have to use a header hack. You
>have to add a header with the specified URI and then include its name in
>Mandatory. I can live with this. How about you?
></Henrik, put down that knife!>

So, in summary, I am happy that you agree on the problem statement, but I
don't think your proposal is any simpler than what we already have on the
table - in fact, it is likely to be rather more complex.

Henrik (who is trying to imagine himself hunting chickens with a knife and
bare hands)

[1] http://www.egroups.com/list/http-wg/7416.html

--
Henrik Frystyk Nielsen,
World Wide Web Consortium
http://www.w3.org/People/Frystyk
