Re: About that Host: header....

> 
> Balint Nagy Endre <bne@bne.dial.eunet.hu> said:
> > In theory Roy is right, in practice the situation is somewhat easier:
....
Roy T. Fielding:
> Now, please explain to me why option (2) is a better technical solution
> to the problems we are facing today.

I tried to explain that (hopefully) most servers can be tricked into accepting full URLs
with changes to their config files, without even looking at the sources.
Because every HTTP server that serves files from a filesystem already has mapping rules
from the web name space to the filesystem name space, I hope the situation is not so
critical that we cannot require 1.1 servers to accept full URLs.
Of course this requires reconfiguring all running servers, but that is not as hopeless
as upgrading them. (Independently of when this happens: now or at some point in the
unforeseeable future.)
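As a rough illustration (a minimal sketch in Python, not any particular server's config
language; DOCROOT and map_request_target are made-up names), the change amounts to one
extra step that strips the scheme and host from a full URL before the usual
web-to-filesystem mapping rule is applied:

    # Sketch only: extend an existing path-to-filesystem mapping
    # so that it tolerates full URLs in the request line.
    from urllib.parse import urlsplit
    import os.path

    DOCROOT = "/usr/local/etc/httpd/htdocs"   # assumed document root

    def map_request_target(target):
        """Map either '/path' or 'http://host/path' onto the filesystem."""
        if target.lower().startswith("http://"):
            # Full URL: drop scheme and host, keep only the path part.
            target = urlsplit(target).path or "/"
        # The usual web-name-space to filesystem-name-space rule.
        return os.path.normpath(DOCROOT + target)

    # map_request_target("/index.html") and
    # map_request_target("http://www.example.com/index.html")
    # both resolve to /usr/local/etc/httpd/htdocs/index.html;
    # a host-aware server would consult the host part instead of ignoring it.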

On the other hand: Host headers can be interpreted only by host-aware servers,
while a full URL is visible to every server.
Trouble may arise with trivial servers that map the URL space / to a fixed
filesystem name, but clients can record how each server they contact reacts
to full URLs.
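For illustration only (www.example.com is a placeholder), the same request in the two
styles under discussion - a Host header, which only a host-aware server understands,
and a full URL, which every server at least sees in its request line:

    GET /index.html HTTP/1.1
    Host: www.example.com

    GET http://www.example.com/index.html HTTP/1.1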

As I see it, non-proxy-aware clients should die out rapidly, because the net can't keep up
with current web traffic:
I see home.netscape.com at about 100 cps (when my request doesn't time out),
other US servers are visible at about 300 cps, and I rarely see better
performance; however, I would see my provider's proxy cache at full line speed (about 1400 cps,
or sometimes even 1700 cps, because PPP compression and V.42bis compression work).
Surely, once my provider installs some proxy cache, I will buy a 28.8 kbps modem, but not
before. (Earlier I played around a bit with the CERN server as a proxy on host ind.eunet.hu
- which is two hops away - and the results were promising.)

When a proxy-aware client talks to a proxy, it will necessarily send full URLs.
An up-to-date proxy will forward the request according to the remembered type of the
origin server:
0.9 servers: the proxy will send 1.0 requests and will try other practices when it sees
a full response (the server has been upgraded).

1.0 servers: may try full requests using full URLs; upon failure, record that the server
doesn't accept full URLs and re-test once a week or a month (see the sketch after this list).
This means one failed request per week or month for every 1.0 origin server per proxy
- a reasonable cost for rapid convergence to full URLs.

1.1 servers: full requests with full URLs.
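A minimal sketch of the per-origin bookkeeping described above (Python; the names
origin_state, wants_full_url, record_result and the weekly RETEST_INTERVAL are
assumptions for illustration, not a proposal for any particular proxy):

    import time

    RETEST_INTERVAL = 7 * 24 * 3600   # re-test roughly once a week (assumed)

    # Remembered state per origin server:
    #   ("full-ok",)             - accepts full URLs in the request line
    #   ("path-only", failed_at) - rejected a full URL at time failed_at
    origin_state = {}

    def wants_full_url(host):
        """Decide whether to send this origin server a full URL."""
        state = origin_state.get(host)
        if state is None or state[0] == "full-ok":
            return True               # unknown or known-good: try the full URL
        # Known to reject full URLs: re-test only after the interval expires.
        return time.time() - state[1] > RETEST_INTERVAL

    def record_result(host, accepted):
        """Remember how the origin server reacted to a full-URL request."""
        if accepted:
            origin_state[host] = ("full-ok",)
        else:
            origin_state[host] = ("path-only", time.time())

    # On each forwarded request: if wants_full_url(host) and the origin rejects
    # the full URL, call record_result(host, False) and fall back to a path-only
    # request; the cost is one failed request per origin per RETEST_INTERVAL.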

1.1 clients may act similarly, but it is not necessary.

If the WG sees enough motivation out there to install (1.1 or better) proxy servers,
we can go forward with full URLs.

Hey HTTP gurus, is this approach usable?

Andrew. (Endre Balint Nagy) <bne@bne.ind.eunet.hu>

Received on Sunday, 24 March 1996 21:26:20 UTC