Re: multi-host virtual sites for HTTP 1.2

> From paulh@imc.org Tue Jul  9 16:56 PDT 1996
> 
> At 3:47 PM -0700 7/9/96, S.R. Venkatramanan wrote:
> >> From http-wg-request@cuckoo.hpl.hp.com Tue Jul  9 15:24 PDT 1996
> >>
> >> With the Host: header in HTTP/1.1 it is possible to efficiently and
> >> easily make a single host act as many "virtual" Web sites.
> >>
> >> What I'd like to see in HTTP/1.2 is the ability to easily and
> >> efficiently make multiple hosts act like a single virtual Web site, so
> >> that one can build huge web sites.
> >
> >I think this is more of an implementation, or rather, a site
> >setup/administration problem than a protocol deficiency.  How about using
> >round-robin DNS to map the hostname to multiple irons, and using the power
> >of such multiple hosts in the cluster to serve NFS-mounted files from the
> >same or similar pool?
> 
> This misses a fair amount of the advantages suggested by Paul. For one,
> every host in the round-robin would have to have all of the same content;

Not really, if you NFS-mount the contents across the hosts in the pool.
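
For clarity, round-robin here just means publishing several A records
for one name; an illustrative BIND zone fragment, with made-up hosts
and addresses, would be:

    ; www.megacorp resolves to any of three hosts in the pool
    www.megacorp.   IN  A   192.0.2.1   ; host1
    www.megacorp.   IN  A   192.0.2.2   ; host2
    www.megacorp.   IN  A   192.0.2.3   ; host3

The name server rotates the order of the records across queries, so
clients spread themselves over the pool.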

> Paul's suggestion allows partial redirection. Thus,
> http://www.megacorp/root1/... could be redirected to one host that has just
> that content, while http://www.megacorp/root2/... could be redirected to a
> different host with different content.

Just like this, the files under megacorp/root1/... can sit on a disk on
one host, and the other hosts can NFS-mount that filesystem.
megacorp/root2/... can sit on a different host's disks and be mounted by
the other hosts, including the first one, which holds the files for
megacorp/root1/...   I hope you get the picture now.
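
Concretely, as a sketch with hypothetical export and mount paths:
host1 exports the root1 tree, host2 exports the root2 tree, and every
host in the pool mounts both.  In /etc/fstab terms:

    # on every host except host1: mount root1 from host1
    host1:/export/root1   /docs/root1   nfs   ro   0   0
    # on every host except host2: mount root2 from host2
    host2:/export/root2   /docs/root2   nfs   ro   0   0
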
> 
> Further, Paul's suggestion makes it easier to add redundancy.
> http://www.megacorp/root1/... could point to two or more hosts that are at
> very different locations. If host1 is down because of hardware or network
> failure, a good client would try host2 and so on. Of course, these hosts
> have to try to have the same content, but it would be much less (and
> therefore more likely to be the same) than for the entire MegaCorp tree.

This, again, can be accomplished very easily with NFS.  The automounter
map can specify the order in which the hosts holding the contents will be
tried, and thus any level of redundancy can be built in.
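
As an illustration, a direct-map entry with made-up hosts and paths;
the automounter picks among the listed replicas and mounts from one
that responds:

    # replicated read-only mount: any of three servers can supply root1
    /docs/root1   -ro   host1,host2,host3:/export/root1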

Having said all this, I should agree with Paul (paulle@microsoft.com)
on the point he made in his followup mail: placing and distributing the
content via NFS forces the data to flow through multiple network
interfaces, which a provision in the protocol spec could avoid.  It is
just like the provision in FTP for directing a data transfer between a
third host and the initiating host, instead of between the hosts that
have the control connection set up.
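
HTTP/1.1 already gives a crude, per-request form of this via
redirection; an illustrative exchange, with made-up hostnames:

    GET /root1/index.html HTTP/1.1
    Host: www.megacorp

    HTTP/1.1 302 Moved Temporarily
    Location: http://host1.megacorp/root1/index.html

Each redirected request costs an extra round trip, though, which is
presumably what a protocol-level provision would save.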

Well, again, proxy servers take care of most of these situations
for static content....

s.r.

PS:  I am not sure if this discussion is appropriate for this distribution.
     Larry, please let me know if it is not and I will shut up.

========================================================
S.R.Venkatramanan				raven@Eng.Sun.COM
Staff Engineer					(415)786-9106
Internet Solutions Engineering			2550 Garcia Ave, M/S UMPK16-120
Sun Microsystems, Inc (SunSoft)			Mountain View, CA 94043-1100
