W3C home > Mailing lists > Public > ietf-http-wg-old@w3.org > September to December 1994

Re: Two proposals for HTTP/2.0

From: Chuck Shotton <cshotton@oac.hsc.uth.tmc.edu>
Date: Thu, 17 Nov 1994 12:41:14 -0600
Message-Id: <aaf1505504021004cfd2@[]>
To: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
>I think allowing
>GET url HTTP/2.0
>makes sense just in terms of cleaning up the protocol, independently
>of the motivation of helping people who want to serve multiple
>host-names from the same host.
>Servers don't really need to know their own names, as much as they
>need to be able to discover their own addresses, and, after doing the
>name lookup on a new hostname first ask "is this me?".  Servers will
>also need some way to discover their own port, though.

This is only true of Unix servers implemented to run under inetd. It
isn't the case for any other server on any other platform, including
stand-alone Unix servers, because those servers already know what port
they are listening on.

Servers DO need to know host name and port info so they can pass it to CGI
applications that may need to generate self-referencing URLs. They just
don't need to learn it by forcing a wholesale change in the way clients
make requests to the server.

Imagine all of the software that will have to change, from clients and
servers to dedicated scripts, applications, etc., if the syntax of a GET
request changes to require a complete URL. Information contained in the URL
is redundant, given that servers already know their IP address, the
protocol they are communicating with, and the port number.

The ONLY missing piece of information is something that has NOTHING to do
with HTTP, HTML, or the WWW and everything to do with some strictly
commercial needs - namely the actual DNS name that was used to access the
server. As I said before, using the domain name to determine server
function may (or may not) be considered a hack, but it doesn't really have
anything to do with HTTP, per se. It has to do with some configuration
"tricks" that some server administrators feel they need to do to make
customers happy. I'm all for that, but I think the appropriate
mechanism should be chosen, and munging the HTTP request syntax isn't it.

The bottom line is that it would be a lot easier to look for a new request
header field than to add a bunch of conditional code to process a
different request syntax for HTTP/1.0 vs. HTTP/2.0. The two protocols will
not be forward/backward compatible if a syntax change is made to the
request, causing a lot of headaches for everyone. I suggest avoiding the
headaches altogether and simply defining the new request header field.
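A rough sketch of the header-field alternative (the header name "Host" is used here purely for illustration; nothing in this thread has settled on a name). The point is that the request line stays exactly as in HTTP/1.0, so one parser handles old and new clients alike:

```python
def parse_request(request_lines):
    """Toy request parser: the request line is unchanged from
    HTTP/1.0, and the host is picked up from an optional header
    field.  Old clients that omit the header still parse cleanly,
    yielding host = None."""
    method, path, version = request_lines[0].split()
    host = None
    for line in request_lines[1:]:
        if not line:
            break  # a blank line ends the header section
        name, _, value = line.partition(":")
        if name.strip().lower() == "host":
            host = value.strip()
    return method, path, version, host
```

With the header present, parse_request(["GET /index.html HTTP/1.0", "Host: www.example.com", ""]) yields the host along with the familiar method, path, and version; without it, the same code path works and host is simply None. No version-conditional request-line parsing is needed.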

Can someone point out a good reason NOT to accommodate the need for sending
a host name by putting it in a required header field as part of a complete
URL? If there's something I'm overlooking, I'll gladly stop whining.

Chuck Shotton                             \
Assistant Director, Academic Computing     \   "Shut up and eat your
U. of Texas Health Science Center Houston   \    vegetables!!!"
cshotton@oac.hsc.uth.tmc.edu  (713) 794-5650 \
Received on Thursday, 17 November 1994 11:45:07 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:16:10 UTC