
Re: Looking for comments on the HTTP Extension draft

From: Henrik Frystyk Nielsen <frystyk@w3.org>
Date: Sun, 03 Jan 1999 16:37:55 -0500
Message-Id: <>
To: Ted Hardie <hardie@equinix.com>
Cc: hardie@equinix.com, masinter@parc.xerox.com, Chris.Newman@innosoft.com, discuss@apps.ietf.org
At 09:48 12/28/98 -0800, Ted Hardie wrote:

>	Thanks for your reply to my comments; I believe, however, that
>some inexactness on my part may have led you astray, as we still seem
>to be talking at cross-purposes on some issues.  I will endeavor below
>to clarify those areas.

Sorry for the delay on my part - I hope my response is more precise this time.

>	Before going into the details, however, I want to make clear
>that I was not proposing that the HTTP *minor* version number be
>revved, but its *major* version number.  This document seems to me to
>propose a framework that will encompass changes sufficiently great to
>warrant that change--the landscape of the web will be changed
>completely if this is adopted, and that calls to me for some pretty
>clear signposts.  
>	I will repeat that I would not see the need for the rev if the
>document did not allow responses to contain extensions based on "a
>priori" knowledge.  Without that "a priori" extension, HEAD, OPTIONS,
>or the other methods you describe would suffice.  Without a very clear
>specification of very specific cases where those "a priori" extensions
>are allowed, they will be used in many, many cases that we cannot now
>foresee.  That means a big, big flag needs to be raised.

According to your reasoning here, HTTP/1.1 would have to be called HTTP/2.0.
Think of the Vary header field. The server may select a representation
based not only on any request header field but also on factors completely
outside of the request itself (indicated by using the "*" value in the Vary
header field). In other words, the server may base the selection on a priori
knowledge about what the client can handle. It is impossible to describe
the open-ended set of ways this information is passed to the server.
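To illustrate the point, here is a minimal sketch (in Python, with hypothetical header dictionaries) of how a cache might apply the Vary rule: a "*" value means the selection depended on information outside the request itself, so the cache can never match on its own.

```python
def can_reuse_cached(vary_value, cached_request_headers, new_request_headers):
    """Decide whether a cached response may be reused for a new request,
    based on the cached response's Vary header field value.
    Header field names are passed lowercased for simplicity."""
    if vary_value is None:
        return True  # the server did not vary the representation
    if vary_value.strip() == "*":
        # Selection depended on factors outside the request itself
        # (a priori knowledge) - a cache can never match on its own.
        return False
    fields = [f.strip().lower() for f in vary_value.split(",")]
    return all(
        cached_request_headers.get(f) == new_request_headers.get(f)
        for f in fields
    )

# A response with "Vary: Accept-Language" may be reused only if the new
# request carries the same Accept-Language value as the cached one.
print(can_reuse_cached("Accept-Language",
                       {"accept-language": "en"},
                       {"accept-language": "en"}))  # True
print(can_reuse_cached("*", {}, {}))                # False
```

The "*" branch is the key: no amount of request-header comparison can reconstruct knowledge the server obtained out of band.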

>> Furthermore, there is no reason why an application MUST read the
>> declaration first - the ns prefixed header fields are just unknown
>> extension header fields until the extension declaration has been
>> interpreted. This kind of "two-pass" header field parsing is not new to
>> HTTP - the same is the case for the connection header field and the
>> cache-control header field, for example.
>I did not say it must *read* the declaration first; I said it must
>*process* it first.  Whether that is a one pass or a two pass header
>field parsing makes no difference to the requirement.  As written, if
>a Man: or an Opt: header exists, it must be processed with its
>namespace headers first, as the meanings of the other headers may
>change based on the extensions.

The change is actually not a *may* but a *will* - the header fields are by
definition *undefined* until they have been defined by the extension
declaration.

This is in fact a very fundamental point, not only about the extension
mechanism but about HTTP itself: a header field doesn't mean anything
unless it has been defined somewhere. That a header field may have an
English-sounding name is really just a coincidence; it could just as well
be a number instead. In your example, it doesn't matter that you know what
"warm" means - unless it has been defined as a property of beer and of
weather, you cannot talk about warm beer and warm weather.

Precisely because of this, the situation you are referring to can only
occur if two individual extensions define the header field "Warm" to mean
two different things - a situation that the extension framework explicitly
prevents by using name spaces.
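As a sketch of how such two-pass, namespace-qualified parsing might look (the Man/ns declaration syntax follows the draft; the extension URI and the "Warm" field here are hypothetical):

```python
import re

def resolve_extension_headers(headers):
    """Two-pass parse of extension declarations, a sketch of the
    draft's namespace mechanism. 'headers' is a list of (name, value)
    pairs as they appeared on the wire."""
    # Pass 1: collect namespace prefixes from Man/Opt declarations,
    # e.g.  Man: "http://example.com/beer-ext"; ns=15
    ns_map = {}
    for name, value in headers:
        if name.lower() in ("man", "opt"):
            m = re.match(r'\s*"([^"]+)"\s*;\s*ns\s*=\s*(\d+)', value)
            if m:
                ns_map[m.group(2)] = m.group(1)
    # Pass 2: a field like "15-Warm" is just an unknown extension
    # header until the declaration for prefix 15 is interpreted.
    resolved = {}
    for name, value in headers:
        m = re.match(r'(\d+)-(.+)', name)
        if m and m.group(1) in ns_map:
            resolved[(ns_map[m.group(1)], m.group(2))] = value
    return resolved

headers = [
    ("Man", '"http://example.com/beer-ext"; ns=15'),
    ("15-Warm", "yes"),
]
# "Warm" ends up qualified by the extension URI, so two extensions can
# both define a "Warm" field without colliding.
print(resolve_extension_headers(headers))
```

Because the resolved key is the (URI, field-name) pair rather than the bare name, the collision Ted describes cannot arise between independently designed extensions.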

The major problem wrt extensibility in HTTP right now is that the only way
to define a header field is to write a standards-track RFC, which we all
know takes a lot of time and leaves no way of gracefully deploying the
feature in the meantime.

The extension framework in fact allows for this and (if deployed) can make
it easier for future extensions to be deployed in a step by step manner -
not only by defining what the extension means but also by specifying which
parameters are associated with these extensions.

>This is, in fact, a point I don't feel I am making well, and I'd like
>Larry to speak to it if he can.  During the CONNEG meetings in Orlando,
>he expressed very well the principle that the meaning of a feature must
>not depend semantically on the values of *other* features (so "Warm" must
>mean the same thing whether we are talking about beer or the weather).  

I am not sure what Larry said, but having warm mean the same for both beer
and weather is exactly having the meaning of one feature depend
semantically on more than one feature, right?

>>   3.  If 2) did not result in a 510 (Not Extended) status code, then
>>       strip the "M-" prefix from the method name and process the
>>       remainder of the request according to the semantics of the
>>       extensions and of the existing HTTP/1.1 method name as defined in
>>       [5].
>Again, I don't seem to be getting my point across here.  The example I
>gave was possibly too local.  I was trying to imply that the apparent
>method for extending a method might seem to imply constraints which
>aren't there.  In the M-GET example, you may have the base semantics
>of GET, but you also have the possibility of secondary effects (like
>the M-GET popping a stack and causing a new value to be present at the
>URL) which substantially change the original semantics and cause
>previously expected characteristics of GET (like idempotence) to
>change.  The implication of this is that anyone examining a method for
>security reasons (like a firewall administrator) cannot rely on the
>method to the right of the M- for any real expectations of the
>method's semantics.

As Yaron points out - this is a feature and not a bug. M- defines a new
method which can only be considered a subclass of an existing method if the
extension declaration is understood by the recipient.
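A rough sketch of that rule, assuming a hypothetical set of supported extension URIs: the recipient refuses with 510 (Not Extended) unless it understands every mandatory extension, and only then strips the M- prefix and falls back to the base HTTP/1.1 method semantics.

```python
# Hypothetical: the set of extension URIs this server implements.
SUPPORTED_EXTENSIONS = {"http://example.com/beer-ext"}

def process_m_method(method, mandatory_exts):
    """Sketch of the mandatory-extension rule: an M- method is a new
    method, and may be treated as a subclass of the base method only
    once every Man-declared extension is understood."""
    if not method.startswith("M-"):
        return method, 200  # ordinary request, no extension processing
    if any(ext not in SUPPORTED_EXTENSIONS for ext in mandatory_exts):
        return None, 510    # 510 Not Extended: do not fall back to GET
    return method[2:], 200  # e.g. M-GET -> GET, plus extension semantics

print(process_m_method("M-GET", ["http://example.com/beer-ext"]))  # ('GET', 200)
print(process_m_method("M-GET", ["http://example.com/unknown"]))   # (None, 510)
```

Note that an intermediary that does not understand the declaration sees only an unknown method "M-GET", never a plain GET - which is exactly the firewall-visible flag Ted is asking about.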

>> I can think of plenty of examples where this is not the case - for
>> example published papers, released software packages etc. However, you
>> are right that this is an issue and extension designers have to be
>> careful (as should everybody else) when selecting a spot in the URI name
>> space. This is also the reason for the careful wording in [2] section 
>> 8:
>> It is strongly recommended that the
>> integrity and persistence of the extension identifier be maintained and
>> kept unquestioned throughout the lifetime of the extension. Care should
>> be taken not to distribute conflicting specifications that reference the
>> same name. Even when an extension specification is made available at the
>> address of the URI, care must be taken that the specification made
>> available at that address does not change over time. One agent may
>> associate the identifier with the old semantics, and another might
>> associate it with the new semantics.
>The fact that people get identifiers right in other contexts doesn't
>really change the fact that this a major change to how URLs are used
>in the current web context.  Given that current context, I am afraid I
>find "strongly recommended" to be weakly worded.  It's not even a
>SHOULD requirement, and I believe that it ought to be the most
>strongly worded MUST we can design.  Without that strong requirement,
>interoperability is based on the good will of the market players, some
>of whom will have strong disincentives to admit some kinds of changes.

Why don't you consider paper publishing and software release to be part of
the current Web? In any case, putting a timeline on URIs identifying
extensions is for our purposes a social and not really a technical problem
- I believe this has been discussed at length (say, the last 5 years) in
the URN community. Note that this in fact can happen with a central
registry just as well - some MIME header fields are defined differently by
different protocols.

It is not the task of specification writers to prevent people from shooting
themselves in the foot - I for one would not take on this responsibility!

>> >4) The content negotiation implied by the document is also not
>> >workable within the current CONNEG framework, because the set
>> >intersection model CONNEG uses presumes that the resource is intended
>> >for a single purpose; it has no provision for a resource that is a
>> >token, a description, and machine-usable code.  In the current
>> >framework, a device selects among multiple values in a set
>> >intersection by q-value, not purpose.  It can't really select "one for
>> >this and one for that" in the same operation.
>> Unless this is different from HTTP then the q values describe the value
>> on the axis and not the dimension of the axis. q values can be applied to
>> any dimension be it type or some other property. In fact, the negotiation
>> hinted at here only spans the media type.
>> As metadata is moving on the Web and the ways of describing capabilities
>> get more powerful, so is content negotiation likely to get more powerful.
>> The extension framework doesn't depend on any particular content
>> negotiation mechanism (including no mechanism at all) and can actually be
>> used to introduce improved content negotiation schemes as they evolve.
>Your first point is exactly what I am trying to get across: q values
>describe the value on the axis and not *which* axis is being given the
>value.  For a content negotiation mechanism to handle the problem you
>propose, it would have to be able to designate the axis and the value.
>I am not aware of any content negotiation mechanism, current or
>proposed, that can handle that at the level of complexity your
>document implies.  The correct operation of content negotiation for a
>single-URI resource which potentially has everything from
>machine-executable code to multi-lingual, multi-character set
>descriptions is not an easy problem.  If you must imply that you want
>it, please be very sure that you describe it as an unsolved problem
>requiring further work.

Sorry, I wasn't clear - there is no way that you can define globally
applicable parameters that can be applied to arbitrary or even unknown
header fields. Parameters can be applied only if that header field has been
defined to support those parameters.

In fact, q-values are a good example - they actually don't work the same
way for all Accept* header fields. Some really only allow q-values as
binary flags, while others define them as a value in [0,1]. Look closely,
for example, at TE and Accept-Encoding - because of the default values
assigned to certain codings, anything but 0 or 1 doesn't make sense.
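For illustration, a small sketch of q-value parsing for an Accept-Encoding style field (the header values here are made up): an omitted q defaults to 1, and q=0 means "not acceptable" - which is why, for some fields, q effectively behaves as a binary flag rather than a point on an axis.

```python
def parse_qvalues(header_value):
    """Parse an Accept-Encoding style field value into {coding: q}.
    An item with no q parameter defaults to q=1; q=0 marks the
    coding as not acceptable."""
    prefs = {}
    for item in header_value.split(","):
        parts = [p.strip() for p in item.split(";")]
        coding, q = parts[0], 1.0  # default qvalue is 1
        for p in parts[1:]:
            if p.lower().startswith("q="):
                q = float(p[2:])
        prefs[coding] = q
    return prefs

print(parse_qvalues("gzip;q=0.5, identity, compress;q=0"))
# {'gzip': 0.5, 'identity': 1.0, 'compress': 0.0}
```

The parser only recovers a number per coding; it says nothing about *which* axis is being negotiated - that is fixed by the definition of the header field itself, which is the point above.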

I see now where you are coming from - however, MIME doesn't work this way,
and if CONNEG assumes this to be the case then you have made me worried
about its feasibility.

>To be brutally honest, I believe that those who ought to be giving this
>framework the very careful review it deserves are simply too tired to
>go over it with the fine-toothed comb it needs.  We must be careful
>to get that review and those problems worked at before it is released,
>though, as the work required to fix this post facto would be enormous.

I think this in fact is a very bad excuse for not moving forward. The draft
(almost in its current state) has been around for a long time, there was a
"last call" on the mailing list [1] on August 18, and the comments that
came back were integrated into the latest 01 draft [2], which was released
Nov 18. Furthermore, there is an extensive set of scenarios available [3].
This is not moving at the speed of light - this is crawling at the pace of
a mole underground.

The main reason for attempting to move HTTP extensions forward on this
list and not only in the HTTP community is precisely to get feedback from a
larger community, many of whom are attempting to use HTTP as a base
transport for various extensions. Remember that HTTP is not owned by the
HTTP WG - as much as we may like that idea.

Thanks for your comments!


[1] http://lists.w3.org/Archives/Public/ietf-http-ext/1998JulSep/0028.html
[3] http://www.w3.org/Protocols/HTTP/ietf-http-ext/Scenarios.html
Henrik Frystyk Nielsen,
World Wide Web Consortium
Received on Sunday, 3 January 1999 16:39:20 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:08:05 UTC