W3C home > Mailing lists > Public > www-talk@w3.org > January to February 1995

How do I properly embed "&" in HREF?

From: Skip Montanaro <skip@automatrix.com>
Date: Sat, 25 Feb 1995 17:01:00 -0500
Message-Id: <199502252201.RAA07014@dolphin.automatrix.com>
To: www-talk@www0.cern.ch

Sorry to bomb the list with this (especially since I don't read www-talk...)
but I'm getting desperate.  I have what's got to be a common question, but
in several hours of hunting around the Web today and scrounging through
www-talk archives I haven't been able to get an answer to the following.

Is there one universally acceptable (I hesitate to use the term "correct")
way to embed "&" or other special characters in HREF attributes?  Here's a
specific example: I generate lists of CGI-type anchors in response to a
number of different queries to the Musi-Cal database, for instance, <a
href="http://www.calendar.com/cgi-bin?city=Basking%20Ridge&state=NJ">Basking
Ridge</a>.

This worked fine until a WinMosaic 2.0.0a9 user reported that it wasn't
working.  After scratching around off and on for a few days, I saw a note in
one of the comp.infosystems.www newsgroups that said to use "&amp;" instead.
I tried it.  So now I have <a
href="http://www.calendar.com/cgi-bin?city=Basking%20Ridge&amp;state=NJ">Basking
Ridge</a>.  Worked fine with Netscape 1.0N.  (Doesn't everything?
<blink>:-)</blink>).  Worked fine with X Mosaic 2.1 and Lynx 2.3 as well. I
checked with my user.  Worked fine with him.  Okey-dokey.  Looks good. Into
production.  Now another user's browser is choking on "&amp;" (can't recall
which one at the moment - it's sort of immaterial at this point).  He
suggested "%26" instead.  ARRRRGGGGH!

Before I make another apparently major blunder, is there a single
universally accepted version of "&" I can embed in my HREFs that the
current versions of all browsers will accept?  (E.g., I really don't care
about X Mosaic 2.1 anymore, although I still use it occasionally.)
Failing that, is there a list somewhere of what the various browsers will
accept?

I started writing Python code to handle all the friggin' HTTP_USER_AGENT
formats available -- what a mess! (HTTP_USER_AGENT, not Python) -- so I
could work around this problem dynamically.  I have been laboring under the
assumption that the column labelled "CharAmpersand" in
<URL:http://www.research.digital.com/nsl/formtest/stats-matrix.html> is
somehow related to this problem.  If not, please let me know...
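The two escaping steps involved here can be sketched in Python (Skip's own language of choice). This is a minimal illustration, not his actual code: percent-encode each query *value*, join with "&", and only then entity-escape the separators for the HTML attribute, so that a "&" occurring inside a value (already "%26") can never collide with a separator.

```python
# Hypothetical sketch of generating such an anchor: percent-encode query
# values, then escape the "&" separators before embedding in an HREF.

def percent_encode(value):
    """Percent-encode characters that are unsafe inside a query value."""
    safe = ("abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "0123456789-_.")
    return "".join(c if c in safe else "%%%02X" % ord(c) for c in value)

def make_href(base, params):
    """Build a query URL, then escape '&' as '&amp;' for HTML output."""
    query = "&".join("%s=%s" % (k, percent_encode(v)) for k, v in params)
    return (base + "?" + query).replace("&", "&amp;")

href = make_href("http://www.calendar.com/cgi-bin",
                 [("city", "Basking Ridge"), ("state", "NJ")])
# href can now be dropped into: <a href="...">Basking Ridge</a>
```

(In modern Python, `urllib.parse.quote` and `urlencode` do the percent-encoding step; the hand-rolled version above just makes the mechanism visible.)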

Help! (and thanks),

--
Skip Montanaro		skip@automatrix.com			  (518)372-5583
Musi-Cal: http://www.calendar.com/concerts/ -or- concerts@calendar.com
Internet Conference Calendar: http://www.calendar.com/conferences/
Return-Path: <pitkow@cc.gatech.edu>
Received: from burdell.cc.gatech.edu by www19 (5.0/NSCS-1.0S) 
	id AA02436; Sat, 25 Feb 1995 17:45:11 +0500
Received: from hapeville.cc.gatech.edu (pitkow@hapeville.cc.gatech.edu [130.207.119.215]) by burdell.cc.gatech.edu (8.6.10/8.6.9) with ESMTP id RAA29529 for <www-talk@www19.w3.org>; Sat, 25 Feb 1995 17:45:10 -0500
Received: (from pitkow@localhost) by hapeville.cc.gatech.edu (8.6.10/8.6.9) id RAA20041 for www-talk@mail.w3.org; Sat, 25 Feb 1995 17:45:08 -0500
From: pitkow@cc.gatech.edu (James Pitkow)
Message-Id: <199502252245.RAA20041@hapeville.cc.gatech.edu>
Subject: WWW User Survey Results
To: www-talk@www19.w3.org
Date: Sat, 25 Feb 1995 17:45:06 -0500 (EST)
X-Mailer: ELM [version 2.4 PL23]
Content-Type: text
Content-Length: 948       


*********************************************************************

	     ANNOUNCE: GVU's 2nd WWW User Survey Results

*********************************************************************

Hello,

   The Graphics, Visualization, & Usability Center (GVU) is proud
to announce the results of the Second World-Wide Web User Surveys.  
Specifically, the following materials and results are now available from:

<URL:http://www.cc.gatech.edu/gvu/user_surveys/User_Survey_Home.html>

   o the complete datasets for all the surveys

   o analysis of the Consumer Survey Pre-Tests 

   o access to the original adaptive surveys

   o graphs of the results from the General Demographics, Authoring, 
     and Browser Usage surveys

   o instructions for joining the www-surveying mailing list

Thanks,

GVU's WWW Surveying Team

www-survey@cc.gatech.edu
Graphics, Visualization, & Usability Center 
Georgia Institute of Technology
Atlanta, GA 30332-0280
Return-Path: <web@sowebo.charm.net>
Received: from sowebo.charm.net by www19 (5.0/NSCS-1.0S) 
	id AA03575; Sat, 25 Feb 1995 20:39:39 +0500
From: web@sowebo.charm.net
Message-Id: <9502252053.AA01537@sowebo.charm.net>
Received: from web by sowebo.charm.net; Sat, 25 Feb 95 20:53 EST
Subject: Ask Dr.Web
To: www-talk@www19.w3.org
Date: Sat, 25 Feb 1995 20:53:09 -0500 (EST)
>From: "CyberWeb" <web@sowebo.CHARM.NET>
>From: Dr.Web@Stars.com
Url: http://WWW.Stars.com/
X-Mailer: ELM [version 2.4 PL23]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 1203

	Soon after I initiated the Web Developer's Virtual Library at my
	site, I started getting random technical questions from visitors
	who evidently thought I might have some good answers.. later, I
	decided that I might as well give this Q&A service a name, so I
	labelled it "Ask Dr.Web". Well, of course, that only encourages
	everyone, and now I'm getting several questions a day.

	The problem with this is that to deal adequately with each question
	takes time, and I don't have enough to spare. So, I'd like to expand
	the "practice" with a few volunteer partners. I would estimate that
	with the load divided between us, your time commitment need not
	exceed 10 - 15 minutes/day.

	If you are interested in helping out, please contact me, and unless
	you're a household name (e.g. for writing the HTML 2.0 spec :*) then
	please provide some evidence that you are a well-qualified web doctor.
	Thanks!
      ___________________________________________________________________
      Dr.Web@Stars.com -=*<URL:http://WWW.Stars.com/>*=- 1 (301) 552 0272
      Web Developer's Virtual Library * CyberWeb SoftWare * WWW Databases
      HTML * CGI * Training * Transatlantic Liaison * Per Ardua, Ad Astra
Return-Path: <ses@tipper.oit.unc.edu>
Received: from tipper.oit.unc.edu by www19 (5.0/NSCS-1.0S) 
	id AA03948; Sat, 25 Feb 1995 21:41:50 +0500
Received: from localhost.uucp by tipper.oit.unc.edu (SMI4.1/FvK 1.02)
          id AA05347; Sat, 25 Feb 95 21:41:48 EST
Message-Id: <9502260241.AA05347@tipper.oit.unc.edu>
To: www-talk@www19.w3.org
Subject: Hey - it's back.
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Sat, 25 Feb 95 21:41:47 -0500
From: Simon E Spero <ses@tipper.oit.unc.edu>
content-length: 267

WWW-TALK seems to be alive again. Has mail been lost whilst it was dead
or is the new machine slowly going through the backlog? If this message
appears soon, I assume the queue has been lost - otherwise, there's going 
to be a lot of full mailboxes on monday.

Simon
Return-Path: <ses@tipper.oit.unc.edu>
Received: from tipper.oit.unc.edu by www19 (5.0/NSCS-1.0S) 
	id AA04545; Sat, 25 Feb 1995 22:39:01 +0500
Received: from localhost.uucp by tipper.oit.unc.edu (SMI4.1/FvK 1.02)
          id AA05496; Sat, 25 Feb 95 22:38:59 EST
Message-Id: <9502260338.AA05496@tipper.oit.unc.edu>
Cc: Multiple recipients of list <www-talk@www19.w3.org>
Subject: Re: Hey - it's back. 
In-Reply-To: Your message of "Sat, 25 Feb 95 22:01:51 +0500."
             <9502260241.AA05347@tipper.oit.unc.edu> 
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Sat, 25 Feb 95 22:38:58 -0500
From: Simon E Spero <ses@tipper.oit.unc.edu>
content-length: 355

Oh dear, it seems that mail has been lost :-(


At least the mailing list is back.  After all the trouble at
CERN it's amazing it came back this quickly.  I wonder if some
disgruntled employee ripped the wires out of the back of the mail
machine.  I'm not sure which is worse - postal workers, or particle
physicists...

Simon // this message composed by voice
Return-Path: <roberts@165.113.1.22>
Received: from mail.crl.com by www19 (5.0/NSCS-1.0S) 
	id AA04848; Sat, 25 Feb 1995 23:15:20 +0500
Received: from [199.4.94.247] (netwings.com) by mail.crl.com with SMTP id AA11382
  (5.65c/IDA-1.5 for <www-talk@www19.w3.org>); Sat, 25 Feb 1995 20:14:12 -0800
Date: Sat, 25 Feb 1995 20:14:12 -0800
Message-Id: <199502260414.AA11382@mail.crl.com>
From: "Roy L. Roberts"  <roberts@165.113.1.22>
Reply-To: "Roy L. Roberts"  <roberts@crl.com>
To: www-talk@www19.w3.org
Subject: Basic authentication/encryption methodology
content-length: 680

I apologize to those of you to whom this request is inappropriate, but
I'm stumped. I'm having a truly terrible time trying to determine what
the real algorithm is for basic authentication. The 1.0 draft says uuencoded,
the CERN pages refer me to RFC 1421, but when I try these approaches the output
doesn't match the encoding created by current browsers. I know I'm
missing something really basic. If someone with more knowledge and a
clearer head would throw me some pointers I'd really appreciate it.

Roy 

Roy L. Roberts       NetWings...Harnessing the Power of the WWW
roberts@crl.com                 NetWings Info Site
(707) 874-1448           <http://netwings.com/nest.html>

Return-Path: <fielding@avron.ics.uci.edu>
Received: from paris.ics.uci.edu by www19 (5.0/NSCS-1.0S) 
	id AA06739; Sun, 26 Feb 1995 03:18:47 +0500
Received: from avron.ics.uci.edu by paris.ics.uci.edu id aa05007;
          26 Feb 95 0:15 PST
To: Multiple recipients of list <www-talk@www19.w3.org>
Subject: Re: Basic authentication/encryption methodology 
In-Reply-To: Your message of "Sat, 25 Feb 1995 23:33:11 +0500."
             <199502260414.AA11382@mail.crl.com> 
Date: Sun, 26 Feb 1995 00:15:38 -0800
From: "Roy T. Fielding" <fielding@avron.ics.uci.edu>
Message-Id:  <9502260015.aa05007@paris.ics.uci.edu>
content-length: 946

> I apologize to those of you to whom this request is inappropriate, but
> I'm stumped. I'm having a truly terrible time trying to determine what
> the real algorithm is for basic authentication. The 1.0 draft says uuencoded,
> the CERN pages refer me to RFC 1421, but when I try these approaches the output
> doesn't match the encoding created by current browsers. I know I'm
> missing something really basic. If someone with more knowledge and a
> clearer head would throw me some pointers I'd really appreciate it.

The correct algorithm is base64 -- the same base64 that is described
in the MIME spec (RFC 1521) and PEM (RFC 1421), but without any line breaks. 
This is fully described in the next revision of the HTTP/1.0 draft.
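The scheme Roy describes is a one-liner in most languages. A minimal Python sketch (the credentials "Aladdin" / "open sesame" are the worked example used in the later HTTP specs, not anything from this thread):

```python
# Basic authentication as described above: base64 of "userid:password",
# with no embedded line breaks (unlike PEM/MIME line-wrapped base64).
import base64

def basic_auth_header(user, password):
    creds = "%s:%s" % (user, password)
    token = base64.b64encode(creds.encode("ascii")).decode("ascii")
    return "Authorization: Basic " + token

print(basic_auth_header("Aladdin", "open sesame"))
# Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```

Note that `base64.b64encode` never inserts line breaks, which is exactly the property that distinguishes this from a literal reading of RFC 1421.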


......Roy Fielding   ICS Grad Student, University of California, Irvine  USA
                                     <fielding@ics.uci.edu>
                     <URL:http://www.ics.uci.edu/dir/grad/Software/fielding>
Return-Path: <yezdi@media.mit.edu>
Received: from dxmint.cern.ch by www19 (5.0/NSCS-1.0S) 
	id AA10081; Sun, 26 Feb 1995 12:23:34 +0500
Received: from www0.cern.ch by dxmint.cern.ch (5.65/DEC-Ultrix/4.3)
	id AA03778; Sun, 26 Feb 1995 18:23:31 +0100
Received: from dxmint.cern.ch by www0.cern.ch (5.0/SMI-4.0)
	id AA18980; Sun, 26 Feb 1995 18:23:30 --100
Received: from media.mit.edu by dxmint.cern.ch (5.65/DEC-Ultrix/4.3)
	id AA03774; Sun, 26 Feb 1995 18:23:29 +0100
Received: by media.mit.edu (5.57/DA1.0.4.amt)
	id AA23119; Sun, 26 Feb 95 12:23:28 -0500
From: Yezdi Lashkari <yezdi@media.mit.edu>
Message-Id: <9502261723.AA23119@media.mit.edu>
Subject: WEBHOUND WWW Interface ready
To: www-talk@www0.cern.ch (www-talk)
Date: Sun, 26 Feb 95 12:23:28 EST
Cc: yezdi@media.mit.edu (Yezdi Lashkari)
Content-Length: 799

I couldn't send this earlier as the www-talk listserver
was down. 

   The WWW Interface to WEBHOUND is ready. You no longer need to
   install the modified browser or client to use WEBHOUND.
 
   http://webhound.www.media.mit.edu/projects/webhound/www-face/
 
   Please use it to seed the database with your favourite documents.

I'm currently working on making the document filtering algorithm
more accurate. I'm also working on making the client both much 
easier to use and trivial to install. 
Meanwhile please use the WWW interface.
 
Comments, feedback appreciated.
 
Yezdi

ps: For those of you who didn't read my earlier message, WEBHOUND is 
a WWW document filtering system that works on the principle of 
automated collaborative filtering. Info is available through the URL
above.
Return-Path: <kipp@lennon.cc.gatech.edu>
Received: from burdell.cc.gatech.edu by www19 (5.0/NSCS-1.0S) 
	id AA10944; Sun, 26 Feb 1995 14:42:15 +0500
Received: from lennon (root@lennon.cc.gatech.edu [130.207.9.20]) by burdell.cc.gatech.edu (8.6.10/8.6.9) with ESMTP id OAA08122; Sun, 26 Feb 1995 14:42:12 -0500
Received: from lennon.cc.gatech.edu (kipp@localhost.cc.gatech.edu [127.0.0.1]) by lennon (8.6.10/8.6.9) with ESMTP id OAA03358; Sun, 26 Feb 1995 14:42:10 -0500
Message-Id: <199502261942.OAA03358@lennon>
To: www-talk@www19.w3.org
Subject: NCSA server performance patch
Cc: kipp@cc.gatech.edu
X-Face: D)Y%J",s^I"S+E'bQ-Wfa'9iIB06{rJHw~d^k2k`t+$Y\Lm+8B[a\6*e;F:H2"{S[(`JDw-
 AwTk[;w:5#~Y$:d'SDZ2U%V#t@*fr0um)w#AR+Ms`%li{]z1<,$!.+J\|EdG(E(;xG-P!WHouD{d\i
 7Na7Q?+o^@b[`*d`=/m&vB;+H6|S{io{b>F?t8&mNJ*_oepwKvGHcR!TAW9UQ1bx!5MyMPkiTe}w=a
 ~bl,shd/7<@Pw,4jHM(]W^XectgTQ7[)\DQ]bbXNF(YU4{c?mo^gPf_tGasyb&}97(cf@_B1Y'",vZ
 >
X-Mailer: exmh version 1.5beta 8/10/94
Date: Sun, 26 Feb 1995 14:42:08 -0500
From: Russel Kipp Jones <kipp@cc.gatech.edu>
content-length: 2927

The past couple of months have seen our NCSA server performance degrade
considerably.  The included patch should improve most servers, but 
those with large yp group files and/or those experiencing multiple 
hits/second should see the most gain.

The included patch gave us an order of magnitude improvement.  
Technical details as well as the diff file are included below.  

Please let me know if you have any questions.

Thank you,

Kipp Jones
-----------------------------------------------------------------------------
kipp@cc.gatech.edu         <URL:http://www.cc.gatech.edu/grads/j/Kipp.Jones/>
Graduate Research Assistant,  Computing and Networking Services, Georgia Tech 
  "Gather your courage and your list of networking information and continue"
							-Greg Hankins 
-----------------------------------------------------------------------------

Technical Details:

We had been experiencing considerable delay on our server connections.
We discovered that the server was doing an initgroups() call for
each fork'd process.  As we use yp, and our group file keeps growing,
AND yp is single threaded, all of the accesses were getting queued
up waiting for ypserv.

We tweaked the code to allow us to only do the initgroups
call once, and use that information the remaining times.  As the
uid/gid is always the same, this is sufficient.

The improvement for us was in the range of 2-3 seconds/connection, a
very considerable amount.  The performance improvement experienced
by others will vary depending on the number of hits, the size of the
group file, and the yp activity.

----------------------------------------------------------------------------
diff for httpd_1.3/httpd.c
----------------------------------------------------------------------------
10a11
> #include "sys/param.h"
131a133
>     int ngroups, groups[NGROUPS];
159a162,178
>     /* Figure out which groups we're in for later use*/
>     if (!getuid()) {
> 	struct passwd* pwent;
> 	int nsavegroups, savegroups[NGROUPS];
> 
> 	if ((nsavegroups = getgroups(NGROUPS, savegroups)) == -1)
> 	    die(SERVER_ERROR,"couldn't save current groupids", stdout);
> 	if ((pwent = getpwuid(user_id)) == NULL)
> 	    die(SERVER_ERROR,"couldn't determine user name from uid", stdout);
> 	if (initgroups(pwent->pw_name, group_id) == -1)
> 	    die(SERVER_ERROR,"unable to initgroups",stdout);
> 	if ((ngroups = getgroups(NGROUPS, groups)) == -1)
> 	    die(SERVER_ERROR,"couldn't save new groupids", stdout);
> 	if (setgroups(nsavegroups, savegroups) == -1)
> 	    die(SERVER_ERROR,"couldn't restore old groupids", stdout);
>     }
> 
199,201d217
<                 if ((pwent = getpwuid(user_id)) == NULL)
<                     die(SERVER_ERROR,"couldn't determine user name from uid",
<                         stdout);
203c219
<                 if (initgroups(pwent->pw_name, group_id) == -1)
---
>                 if (setgroups(ngroups, groups) == -1)
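The caching idea in the patch above is language-independent. Here is an illustrative Python sketch of the same strategy (my own, not part of the httpd patch): compute the supplementary-group list for the server user once, cache it, and reuse it for every request instead of re-running the expensive initgroups() lookup against NIS each time.

```python
# Sketch of the caching strategy: scan the group database once per
# (user, primary gid) pair, then serve every later request from the
# cache.  A root server would call os.setgroups() with the cached list
# per request instead of os.initgroups(), avoiding the NIS round trip.
import grp

_group_cache = {}

def supplementary_groups(username, primary_gid):
    """Group IDs initgroups() would set for this user, computed once."""
    key = (username, primary_gid)
    if key not in _group_cache:
        gids = [g.gr_gid for g in grp.getgrall() if username in g.gr_mem]
        if primary_gid not in gids:
            gids.append(primary_gid)
        _group_cache[key] = gids
    return _group_cache[key]
```

As in the C patch, this is only valid because the server always switches to the same uid/gid; if the User directive could change between requests, the cache key would have to be checked against the current config.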




Return-Path: <farellc@io.org>
Received: from io.org by www19 (5.0/NSCS-1.0S) 
	id AA12145; Sun, 26 Feb 1995 16:55:40 +0500
Received: from hipper.net2.io.org (hipper.net2.io.org [199.43.113.60]) by io.org (8.6.9/8.6.9) with SMTP id QAA17917; Sun, 26 Feb 1995 16:55:33 -0500
Date: Sun, 26 Feb 1995 16:55:33 -0500
Message-Id: <199502262155.QAA17917@io.org>
X-Sender: farellc@io.org
X-Mailer: Windows Eudora Version 2.0.3
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
To: ses@tipper.oit.unc.edu,
        Multiple recipients of list <www-talk@www19.w3.org>
From: farellc@io.org (Cecilia Farell)
Subject: Re: Hey - it's back. 
content-length: 1555

At 10:40 PM 2/25/95 +0500, Simon E Spero wrote:
>Oh dear, it seems that mail has been lost :-(
>
>
>At least the mailing list is back.  After all the trouble at
>CERN it's amazing it came back this quickly.  I wonder if some
>disgruntled employee ripped the wires out of the back of the mail
>machine.  I'm not sure which is worse - postal workers, or particle
>physicists...
>
>Simon // this message composed by voice
>


Other than some messages from the www-talk list, I have not received mail
from ANY of the CERN mailing lists for about 3 weeks. What kind of trouble
has CERN been having, and does anybody know if any of the other mailing
lists are back (html, announce, etc.)?

Any answers on this would be much appreciated. I heard a very unfounded
rumour that CERN is pulling out of the Web project. I hope to God this is
not true!

Regards,

Cecilia

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                *                                          *           
           ^ ^  ^  ^^^  ^^^  ^^^  ^^^    ^   ^   ^^^  ^^^  ^   ^
           ^^^  ^  ^ ^  ^ ^  ^^   ^ ^   ^ ^ ^ ^  ^^   ^  ^ ^  ^^^
           ^ ^  ^  ^    ^    ^^^  ^  ^ ^       ^ ^^^  ^^^  ^ ^   ^

     Web Page Development * WWW and Internet Consulting * Windows Help

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
             Cecilia Farell * Toronto, Canada * farellc@io.org          
   <a href="http://www.io.org/hippermedia">Hippermedia</a>  
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Return-Path: <hemang@bcpsparc.ucdavis.edu>
Received: from franc.ucdavis.edu by www19 (5.0/NSCS-1.0S) 
	id AA14200; Sun, 26 Feb 1995 18:32:56 +0500
Received: from bcpsparc.ucdavis.edu by franc.ucdavis.edu (8.6.10/UCD3.0)
	id PAA16408; Sun, 26 Feb 1995 15:32:38 -0800
Received: by bcpsparc.ucdavis.edu (4.1/UCD2.03)
	id AA07566; Sun, 26 Feb 95 15:41:36 PST
Date: Sun, 26 Feb 1995 15:41:36 -0800 (PST)
From: Hemang Patel <hemang@bcpsparc.ucdavis.edu>
To: Cecilia Farell <farellc@io.org>
Cc: Multiple recipients of list <www-talk@www19.w3.org>
Subject: Re: Hey - it's back. 
In-Reply-To: <199502262155.QAA17917@io.org>
Message-Id: <Pine.SUN.3.91.950226153857.7558C-100000@bcpsparc>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
content-length: 682

On Sun, 26 Feb 1995, Cecilia Farell wrote:

> Any answers on this would be much appreciated. I heard a very unfounded
> rumour that CERN is pulling out of the Web project. I hope to God this is
> not true!
> 

As far as I know, it is true. CERN is pulling out of Web development. 
However, all is not lost. WWW3 is taking over what CERN is leaving 
behind. Recently, I received e-mail advising me that any links I may have 
to info.cern.ch  should be changed to www.w3.org.


__________________________________________________
Hemang Patel
Section of Molecular and Cellular Biology
Univ. of Ca. Davis
hemang@bcpsparc.ucdavis.edu
http://www-mcb.ucdavis.edu/people/hemang/home.html

Return-Path: <dale@ora.com>
Received: from rock.west.ora.com by www19 (5.0/NSCS-1.0S) 
	id AA15070; Sun, 26 Feb 1995 20:07:59 +0500
Received: by rock (8.6.10/)
From: "Dale Dougherty" <dale@ora.com>
Message-Id: <9502261707.ZM12333@rock.west.ora.com>
Date: Sun, 26 Feb 1995 17:07:38 -0800
In-Reply-To: farellc@io.org (Cecilia Farell)
        "Re: Hey - it's back." (Feb 26,  5:16pm)
References: <199502262155.QAA17917@io.org>
X-Mailer: Z-Mail (3.0.0 15dec93)
To: farellc@io.org, Multiple recipients of list <www-talk@www19.w3.org>
Subject: Re: Hey - it's back.
Content-Type: text/plain; charset=us-ascii
Mime-Version: 1.0
content-length: 490

Yes, it is true that CERN is not the home of the Web project
any longer.  It has passed to INRIA in France and MIT in the
US, who are both involved in the W3 Consortium.  If you would
like more detail, read my article in GNN's Netnews:

http://gnn.com/gnn/news/feature/inria2.html

Dale

-- 
Dale Dougherty     (dale@ora.com) 
Publisher, Global Network Navigator, http://gnn.com/ 
O'Reilly & Associates, Inc.
103A Morris Street, Sebastopol, California 95472 
(707) 829-0515; 1-800-998-9938
Return-Path: <connolly@hal.com>
Received: from hal.com by www19 (5.0/NSCS-1.0S) 
	id AA15457; Sun, 26 Feb 1995 20:31:21 +0500
Received: from ulua.hal.com by hal.com (4.1/SMI-4.1.1)
	id AA26151; Sun, 26 Feb 95 17:31:19 PST
Received: from localhost by ulua.hal.com (4.1/SMI-4.1.2)
	id AA00354; Sun, 26 Feb 95 19:28:35 CST
Message-Id: <9502270128.AA00354@ulua.hal.com>
To: skip@automatrix.com
Cc: Multiple recipients of list <www-talk@www19.w3.org>
Subject: Re: How do I properly embed "&" in HREF? 
In-Reply-To: Your message of "Sat, 25 Feb 1995 17:19:08 +0500."
             <199502252201.RAA07014@dolphin.automatrix.com> 
Date: Sun, 26 Feb 1995 19:28:35 -0600
From: "Daniel W. Connolly" <connolly@hal.com>
content-length: 746

In message <199502252201.RAA07014@dolphin.automatrix.com>, Skip Montanaro writes:
>
>Is there one universally acceptable (I hesitate to use the term "correct")
>way to embed "&" or other special characters in HREF attributes? 

I've raised this same issue a couple times, most recently on html-wg,
at:

http://www.acl.lanl.gov/HTML_WG/html-wg-95q1.messages/0430.html

The best answer we came up with is to write: &#38;

Some (broken) browsers don't grok it, so there's no way to please
everybody today.

There was a nice followup that gave a summary of the state of the art:

Re: Forms/CGI urls: '&' in HREFattributes
David Robinson (drtr1@cam.ac.uk)
Fri, 10 Feb 95 12:53:12 EST
http://www.acl.lanl.gov/HTML_WG/html-wg-95q1.messages/0455.html

Dan
Return-Path: <73647.1624@compuserve.com>
Received: from dxmint.cern.ch by www19 (5.0/NSCS-1.0S) 
	id AA22706; Mon, 27 Feb 1995 07:34:51 +0500
Received: from www0.cern.ch by dxmint.cern.ch (5.65/DEC-Ultrix/4.3)
	id AA29988; Mon, 27 Feb 1995 13:34:49 +0100
Received: from dxmint.cern.ch by www0.cern.ch (5.0/SMI-4.0)
	id AA03181; Mon, 27 Feb 1995 13:34:49 --100
Received: from dub-img-1.compuserve.com by dxmint.cern.ch (5.65/DEC-Ultrix/4.3)
	id AA29984; Mon, 27 Feb 1995 13:34:47 +0100
Received: by dub-img-1.compuserve.com (8.6.9/5.941228sam)
	id HAA06955; Mon, 27 Feb 1995 07:34:45 -0500
Date: 27 Feb 95 07:33:18 EST
From: Jyrki Poysti <73647.1624@compuserve.com>
To: LISTSERV <WWW-TALK@www0.cern.ch>
Subject: Re: Hey - its back
Message-Id: <950227123318_73647.1624_CHL41-8@CompuServe.COM>
Content-Length: 4326

>Any answers on this would be much appreciated. I heard a very unfounded
>rumour that CERN is pulling out of the Web project. 

Please see the enclosed HTML page:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

<H1><a href="http://www.cern.ch/">CERN</a> and <a href="http://www.inria.fr">INRIA</a> join forces in World-Wide Web Core
Development
</H1>

<HR>
As is well known, the <a href="http://info.cern.ch">World-Wide Web</a> (WWW) originated at <a href="http://www.cern.ch/">CERN</a>
with Tim Berners-Lee and his colleagues.  It is now the information system which is largely driving the Internet, and which has a
substantial global business potential for the near future.  The World-Wide Web was conceived as a communication tool for the widely
dispersed scientific community of High-Energy Physics.  It is destined to become essential for the Global Information
Infrastructure, and is thus a prime example of important spin-off from pure scientific research.
<P>

The Web, used by millions of people on the Internet, is in a continual process of enhancement, driven by new techniques and by
applications with differing demands.  Up to now CERN, in its pioneering role, has provided the technical reference point and
invested substantial resources in the development of WWW.  It is clear that its further development as an informatics project now
needs to be undertaken in a wider context.  In this spirit CERN has been working with the European Commission on the definition of a
project, in collaboration with parallel activities at <a href="http://web.mit.edu/">MIT</a> in the USA.
<P>

The recent <a href="http://www.cern.ch/Press/Releases94/PR16.94E_LHC-Council.html">approval</a> of the <a
href="http://www.cern.ch/CERN/LHC/LHCwelcome.html">Large Hadron Collider (LHC)</a> project implies that CERN needs to concentrate
its resources on efforts directly relevant to the future collider and its experimental programme.  CERN intends to remain a major
user of WWW, which is seen as an essential tool for the scientific community, and CERN has a continued interest in its technical
stability and evolution.  Thus CERN will continue to be involved in developments of particular interest to its community, while
envisaging a change of focus of its efforts with a corresponding reduction of involvement in more general developments.
<P>

CERN and the <a href="http://www.cordis.lu/en/home.html">European Commission</a> wish to ensure a strong European presence in WWW
development as well as a single set of standards for the technology.
<P>

<a href="http://www.inria.fr">INRIA</a> with its wide variety of advanced informatics projects and a long history of basic software
development, already contributes to many Europe-wide programmes.  It is also involved in a variety of web-related research projects
including structured document editing, content routing, multicasting browsers, and integration of Object-Oriented databases.
<P>

INRIA has played a significant role in the development of the Internet in France and Europe and is willing to assume
responsibilities in the standardisation and the promotion of the World-Wide Web.
In agreement with CERN, INRIA has therefore accepted to host the European WebCore project with funding from the Commission.  The
project will tackle such issues as:
<UL>
<LI>evolution of the Web components specifications,
<LI>development of reference code,
<LI>information services on the Web,
<LI>promotion and dissemination in Europe.
</UL>
In the early phases of this project, CERN will collaborate with INRIA, to provide a smooth transition and ensure continuity of
developments and services for the benefit of the user community.
CERN is confident that the Webcore project will play a key role in the participation of our continent in the Information Society.
<P>
<HR>
CERN, the European Laboratory for Particle Physics, has its headquarters in Geneva. At present, its Member States are Austria,
Belgium, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, Netherlands, Norway, Poland, Portugal, the
Slovak Republic, Spain, Sweden, Switzerland and the United Kingdom. Israel, the Russian Federation, Turkey, Yugoslavia (status
suspended after UN embargo, June 1992), the European Commission and Unesco have observer status.

<HR>

J. Poysti 
73647.1624@compuserve.com


Return-Path: <rst@ai.mit.edu>
Received: from life.ai.mit.edu by www19 (5.0/NSCS-1.0S) 
	id AA26875; Mon, 27 Feb 1995 10:34:38 +0500
Received: from volterra (volterra.ai.mit.edu) by life.ai.mit.edu (4.1/AI-4.10) for www-talk@www19.w3.org id AA27321; Mon, 27 Feb 95 10:34:37 EST
From: rst@ai.mit.edu (Robert S. Thau)
Received: by volterra (4.1/AI-4.10) id AA23935; Mon, 27 Feb 95 10:34:35 EST
Date: Mon, 27 Feb 95 10:34:35 EST
Message-Id: <9502271534.AA23935@volterra>
To: kipp@cc.gatech.edu
Cc: www-talk@www19.w3.org
In-Reply-To: <199502261942.OAA03358@lennon> (message from Russel Kipp Jones on Sun, 26 Feb 1995 15:04:49 +0500)
Subject: Re: NCSA server performance patch
content-length: 4170

   From: Russel Kipp Jones <kipp@cc.gatech.edu>

   We had been experiencing considerable delay on our server connections.
   We discovered that the server was doing an initgroups() call for
   each fork'd process.  As we use yp, and our group file keeps growing,
   AND yp is single threaded, all of the accesses were getting queued
   up waiting for ypserv.

   We tweaked the code to allow us to only do the initgroups
   call once, and use that information the remaining times.  As the
   uid/gid is always the same, this is sufficient.

FWIW, there's more in the way of speed increases where that came from,
and some of them are fairly easy to arrange for.  On each connection,
the NCSA server:

*) Talks to the nameserver --- yet another opportunity for YP to
   serialize you.  I'm not sure how to fix this *portably*, but
   caching the hostnames of recently seen clients in shared memory
   eases things somewhat.  Compiling with -DMINIMAL_DNS keeps the
   server from talking to the nameserver at all, and is a simpler and
   better option for those who can live with it.

*) Tries to open a whole lot of .htaccess files which aren't there.
   People running close to the edge can get around this by turning off
   the .htaccess checks entirely with an AllowOverride None at the
   right spot in access.conf.  This may be a substantial win for those
   afflicted with AFS.  (The checks for symlinks at every directory
   are also a potential source of overhead, though in that case,
   things may be better simply because the directories in question
   actually exist, and so kernel machinery like the namei cache works).

There are a few more obvious speed improvements which are harder to
arrange for (you have to change some of the server code), but the
payoffs, at least for the first listed hack below, are substantial:

*) Reads the request and MIME header from the client character by
   character, taking a context-switch into and out of the kernel on
   each.  This is a MAJOR performance hit, and easy to kludge around,
   but you do have to change the server code.  It's only mildly hard
   to fix right.

*) Opens the locale database to find out the names of the months, and
   opens some other file to find the time zone.  (Actually, the C
   library does this behind httpd's back, but the effect is the same).

   I got rid of this overhead by doing a few dummy time conversions
   before starting to listen on the socket --- this initializes the C
   library time-conversion code in the parent process, and so the
   children don't have to do it themselves after the fork().

I've fixed most of the above in the server I'm running (all except
.htaccess files, which require some code cleanup to get right), and
that gets you close to the end of the line --- much improvement beyond
that will probably come only by eliminating the fork on every
transaction.  (The overhead of fork() is difficult to measure directly,
but it shows up indirectly in some of my other measurements, and it
seems to be large).

That's only after some cleanups, though --- the standard NCSA server
spends most of its time figuring out what groups "nobody" is in, over
and over and over...

rst

PS --- your patch has a *very* minor bug --- if the server rereads the
       config files, it doesn't change the group info, even though
       User might have changed in httpd.conf, and the appropriate
       groups might have changed in any event.  This is never likely
       to come up in practice, but I'm a little compulsive about these
       things.

[1] If you really want to do this right, at least on SunOS, you can
    recv the header with MSG_PEEK instead of reading it, and then only
    read those bytes out of the kernel buffers which actually contain
    the header, leaving the rest for a CGI script.  This handles POST
    right, as well as GET.  David Robinson came up with this idea and
    has actually coded it up.  Or, you could do as the CERN server does
    --- buffer the client socket as usual, and then pipe the contents
    of the buffer to any script that wants to see them, but that's more
    work starting from the existing NCSA code.
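The MSG_PEEK trick in [1] comes out to roughly this shape (a sketch under the assumption that the whole header shows up in one peek; real code would loop until it sees the blank line):

```c
#include <string.h>
#include <sys/socket.h>

/* Peek at the kernel-buffered bytes, find the blank line that ends the
 * header, then read *only* the header out of the kernel, leaving any
 * POST body behind for a CGI script to read. */
static ssize_t read_header_only(int fd, char *hdr, size_t max)
{
    ssize_t n = recv(fd, hdr, max - 1, MSG_PEEK);
    size_t  hlen;
    char   *end;

    if (n <= 0)
        return n;
    hdr[n] = '\0';
    end = strstr(hdr, "\r\n\r\n");
    hlen = end ? (size_t)(end - hdr) + 4 : (size_t)n;

    /* now consume exactly the header bytes */
    n = recv(fd, hdr, hlen, 0);
    if (n >= 0)
        hdr[n] = '\0';
    return n;
}
```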

Return-Path: <zhang@welchgate.welch.jhu.edu>
Received: from welchgate.welch.jhu.edu by www19 (5.0/NSCS-1.0S) 
	id AA27992; Mon, 27 Feb 1995 11:30:13 +0500
Received: by welchgate.welch.jhu.edu (4.1/1.34)
	id AA26345; Mon, 27 Feb 95 11:24:26 EST
From: zhang@welchgate.welch.jhu.edu (Dongming Zhang)
Message-Id: <9502271624.AA26345@welchgate.welch.jhu.edu>
Subject: Z3950 and netscape.
To: www-talk@www19.w3.org
Date: Mon, 27 Feb 95 11:24:23 EST
X-Mailer: ELM [version 2.3 PL11]
content-length: 228


   Greeting!

   I have been trying to hook a z3950 client into Netscape, i.e. via the
MIME mechanism, but it doesn't seem to work.  Does anybody have
experience with MIME?
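   In case a concrete example helps: for external viewers, X Netscape
consults the usual mime.types and mailcap files, so hooking in a client
generally means entries along these lines (the type name and client path
here are made up for illustration, and the server must send the matching
Content-Type for the viewer to fire):

```
# ~/.mime.types -- invent a private type and map an extension to it
application/x-z3950-query    z39

# ~/.mailcap -- hand documents of that type to the external client
application/x-z3950-query; /usr/local/bin/z3950-client %s
```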

   Thanks in advance.

   Dongming Zhang  (Welch, Johns Hopkins, USA)
Return-Path: <PAULA-W@aci1.aci.ns.ca>
Received: from Owl.nstn.ca (owl.nstn.ns.ca) by www19 (5.0/NSCS-1.0S) 
	id AA01949; Mon, 27 Feb 1995 14:13:43 +0500
Received: from mercury.aci.ns.ca (aci1.aci.ns.ca [192.75.64.34]) by Owl.nstn.ca (8.6.9/8.6.6) with SMTP id PAA06697 for <www-talk@www19.w3.org>; Mon, 27 Feb 1995 15:13:41 -0400
Received: from ACI_1/SpoolDir by mercury.aci.ns.ca (Mercury 1.13);
    Mon, 27 Feb 95 15:11:25 ADT
Received: from SpoolDir by ACI_1 (Mercury 1.13); Mon, 27 Feb 95 15:11:01 ADT
From: "PAULA WILSON" <PAULA-W@aci1.aci.ns.ca>
To: www-talk@www19.w3.org
Date:          Mon, 27 Feb 1995 15:10:54 ADT
Subject:       Linux&HTTPD
Priority: normal
X-Mailer:     Pegasus Mail/Windows (v1.11a)
Message-Id: <2FBAC60A4E@mercury.aci.ns.ca>
content-length: 437

Hi
I hope this is the correct forum for this question.
Can anyone tell me how many people at one time can 
access a Web site with the following setup:
Linux ver 1.0.9
NCSA HTTPD ver 1.3
386SX with 16 MB RAM
None of the software will give me this information, so I would 
be grateful for your assistance.
 
---------------------------
Paula Wilson
Atlantic Computer Institute
5523 Spring Garden Rd.
Suite 201
Halifax
Nova Scotia
B3J 3T1
Return-Path: <farellc@io.org>
Received: from io.org by www19 (5.0/NSCS-1.0S) 
	id AA03612; Mon, 27 Feb 1995 14:52:16 +0500
Received: from hipper.net2.io.org (hipper.net2.io.org [199.43.113.60]) by io.org (8.6.9/8.6.9) with SMTP id OAA29155 for <www-talk@www19.w3.org>; Mon, 27 Feb 1995 14:52:11 -0500
Date: Mon, 27 Feb 1995 14:52:11 -0500
Message-Id: <199502271952.OAA29155@io.org>
X-Sender: farellc@io.org (Unverified)
X-Mailer: Windows Eudora Version 2.0.3
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
To: www-talk@www19.w3.org
From: farellc@io.org (Cecilia Farell)
Subject: Transfer from CERN to INRIA
content-length: 1394

Thanks to all who sent out info on the transfer from CERN to INRIA. It was
rather reassuring.

It appears, then, that the W3 Consortium now provides the Web documents
previously stored at CERN. Consequently, all references to info.cern.ch
should be changed to www.w3.org.

It also appears that the consortium is taking over the CERN mailing lists:
subscription and other administrative commands now go to
listproc@mail.w3.org, and the address for www-talk itself is now
www-talk@www19.w3.org.

However, I am only receiving mail from the www-talk list. Anyone have any
ideas about the status of the announce, html, security and rdb lists?

Thanks again to all,

Regards,

Cecilia

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                *                                          *           
           ^ ^  ^  ^^^  ^^^  ^^^  ^^^    ^   ^   ^^^  ^^^  ^   ^
           ^^^  ^  ^ ^  ^ ^  ^^   ^ ^   ^ ^ ^ ^  ^^   ^  ^ ^  ^^^
           ^ ^  ^  ^    ^    ^^^  ^  ^ ^       ^ ^^^  ^^^  ^ ^   ^

     Web Page Development * WWW and Internet Consulting * Windows Help

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
             Cecilia Farell * Toronto, Canada * farellc@io.org          
   <a href="http://www.io.org/hippermedia">Hippermedia</a>  
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Return-Path: <reimann@access.digex.net>
Received: from nfs1.digex.net by www19 (5.0/NSCS-1.0S) 
	id AA05947; Mon, 27 Feb 1995 15:51:20 +0500
Received: by nfs1.digex.net id AA05651
  (5.67b8/IDA-1.5 for www-talk@www19.w3.org); Mon, 27 Feb 1995 15:51:16 -0500
Date: Mon, 27 Feb 1995 15:51:16 -0500
From: Carl Reimann <reimann@access.digex.net>
Message-Id: <199502272051.AA05651@nfs1.digex.net>
To: www-talk@www19.w3.org
References: <199502272050.AA05617@nfs1.digex.net>
In-Reply-To: <199502272050.AA05617@nfs1.digex.net>
Subject: Thank you.
Reply-To: reimann@access.digex.net
content-length: 1599


   Greeting!

   I have been trying to hook a z3950 client into Netscape, i.e. via the
MIME mechanism, but it doesn't seem to work.  Does anybody have
experience with MIME?

   Thanks in advance.

   Dongming Zhang  (Welch, Johns Hopkins, USA)

----------------------------------------------------------------------------
Greetings! I have constructed this automatic message to get people fast
responses. I do not intend it to be a substitute for a mutually productive
relationship. I receive a great deal of mail since my work toward improving
higher education through the creation and use of electronic resources has
been featured in various books and magazine articles.

There may be a quick answer to your message if it is similar to ones I
commonly get.  Here's how to get some fast insight, even before my reply. 
I'll be getting back to you very soon.  Thanks for your understanding and
willingness to accept this automatic message.

For information about:		Send me e-mail w/ subject line:

* Higher Education Database		info database
* Higher Education WWW Archives		info archives
	accessing WWW				how do I access www
* (Higher) Education Mailing Lists	info mailing lists
	leaving a list				how do I leave
	joining a list				how do I join
* Higher Education Project Assistance	info project assistance
  All of the (*) above			info all resources

----------------------------------------------------------------------------
Carl Reimann
reimann@access.digex.net
http://www.access.digex.net/~reimann   <-- archives are located here
----------------------------------------------------------------------------
Return-Path: <houser@cpcug.org>
Received: from cpcug.org by www19 (5.0/NSCS-1.0S) 
	id AA08568; Mon, 27 Feb 1995 17:40:19 +0500
Received: from houser.cpcug.org by cpcug.org with SMTP id AA17448
  (5.67b8/IDA-1.5 for <www-talk@mail.w3.org>); Mon, 27 Feb 1995 17:40:12 -0500
Message-Id: <199502272240.AA17448@cpcug.org>
X-Sender: houser@cpcug.org
X-Mailer: Windows Eudora Version 1.4.4
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Mon, 27 Feb 1995 18:16:32 -0500
To: www-talk@www19.w3.org, PAULA-W@aci1.aci.ns.ca
From: houser@cpcug.org (Walter Houser)
Subject: Re: Linux&HTTPD
content-length: 1240

IMHO the bottleneck is likely the link speed and the nature of the
connection.  You are probably using a SLIP or PPP connection with a 14.4
modem and Van Jacobson compression, thereby getting 38,400 to 57,600 bps.
Such a setup is slow for a browser platform; it must be excruciating for
those browsing a server running under those conditions.  The bottom line
for the provider (as well as the browser) is that you can't have enough
bandwidth from the Internet to your PC.  At the least, get a 28.8 V.34
modem (NOT V.FC) and V.34 service from your Internet Service Provider.

>Hi
>I hope this is the correct forum for this question.
>Can any one tell me how many people at one time can 
>access a Web site with the following setup:
>Linux ver 1.0.9
>NCSA HTTPD ver 1.3
>386SX with 16 MB RAM
> None of the software will give me this information so I would 
>be grateful for your assistance.
> 
>---------------------------
>Paula Wilson
>Atlantic Computer Institute
>5523 Spring Garden Rd.
>Suite 201
>Halifax
>Nova Scotia
>B3J 3T1
>
>

Walt Houser                                            301-622-4384 (home)
houser@cpcug.org                                       202-786-9572 (office)
There are no accidents, just unintended consequences.

Received: from www10.lcs.mit.edu (www10.w3.org) by www19 (5.0/NSCS-1.0S) 
	id AA10622; Mon, 27 Feb 1995 19:22:26 +0500
Received: from netprint.digprod.com by www10.lcs.mit.edu (5.0/NSCS-1.0S) 
	id AA26051; Mon, 27 Feb 1995 19:22:24 +0500
Received: from smtplink.digprod.com by netprint.digprod.com with SMTP (5.65/1.2-eef)
	id AA06997; Mon, 27 Feb 95 17:50:53 -0500
Return-Path: <cbrenton@digprod.com>
Received: from cc:Mail by smtplink.digprod.com
	id AA793941902 Mon, 27 Feb 95 19:25:02 EST
Date: Mon, 27 Feb 95 19:25:02 EST
From: cbrenton@digprod.com (Brenton, Chris)
Encoding: 4572 Text
Message-Id: <9501277939.AA793941902@smtplink.digprod.com>
To: www-talk@www10.w3.org, cbrenton@digprod.com
Subject: Re: No subject given
Content-Length: 4479

     


______________________________ Reply Separator _________________________________
Subject: No subject given
Author:  www-talk@www10.w3.org at Internet
Date:    2/27/95 5:25 PM


   From: Russel Kipp Jones <kipp@cc.gatech.edu>
     
   We had been experiencing considerable delay on our server connections. 
   Discovered that it was the server was doing an initgroups call for 
   each fork'd process.  As we use yp, and our group file keeps growing, 
   AND yp is single threaded, all of the accesses were getting queued
   up waiting for ypserv.
     
   We tweaked the code to allow us to only do the initgroups
   call once, and use that information the remaining times.  As the 
   uid/gid is always the same, this is sufficient.
     
FWIW, there's more in the way of speed increases where that came from, 
and some of them are fairly easy to arrange for.  On each connection, 
the NCSA server:
     
*) Talks to the nameserver --- yet another opportunity for YP to
   serialize you.  I'm not sure how to fix this *portably*, but 
   cacheing the hostnames of recently seen clients in shared memory 
   eases things somewhat.  Compiling with -DMINIMAL_DNS keeps the 
   server from talking to the nameserver at all, and is a simpler and 
   better option for those who can live with it.
     
*) Tries to open a whole lot of .htaccess files which aren't there.
   People running close to the edge can get around this by turning off 
   the .htaccess checks entirely with an AllowOverride None at the 
   right spot in access.conf.  This may be a substantial win for those 
   afflicted with AFS.  (The checks for symlinks at every directory 
   are also a potential source of overhead, though in that case, 
   things may be better simply because the directories in question
   actually exist, and so kernel machinery like the namei cache works).
     
There are a few more obvious speed improvements which are harder to 
arrange for (you have to change some of the server code), but the 
payoffs, at least for the first listed hack below, are substantial:
     
*) Reads the request and MIME header from the client character by
   character, taking a context-switch into and out of the kernel on 
   each.  This is a MAJOR performance hit, and easy to kludge around, 
   but you do have to change the server code.  It's only mildly hard 
   to fix right.
     
*) Opens the locale database to find out the names of the months, and
   opens some other file to find the time zone.  (Actually, the C 
   library does this behind httpd's back, but the effect is the same).
     
   I got rid of this overhead by doing a few dummy time conversions 
   before starting to listen on the socket --- this initializes the C 
   library time-conversion code in the parent process, and so the 
   children don't have to do it themselves after the fork().
     
I've fixed most of the above in the server I'm running (all except 
.htaccess files, which require some code cleanup to get right), and 
that gets you close to the end of the line --- much improvement beyond 
that will probably come only by eliminating the fork on every 
transaction.  (The overhead of fork() is difficult to measure directly, 
but it shows up indirectly in some of my other measurements, and it 
seems to be large).
     
That's only after some cleanups, though --- the standard NCSA server 
spends most of its time figuring out what groups "nobody" is in, over 
and over and over...
     
rst
     
PS --- your patch has a *very* minor bug --- if the server rereads the
       config files, it doesn't change the group info, even though 
       User might have changed in httpd.conf, and the appropriate 
       groups might have changed in any event.  This is never likely 
       to come up in practice, but I'm a little compulsive about these 
       things.
     
[1] If you really want to do this right, at least on SunOS, you can
    recv the header with MSG_PEEK instead of reading it, and then only 
    read those bytes out of the kernel buffers which actually contain 
    the header, leaving the rest for a CGI script.  This handles POST 
    right, as well as GET.  David Robinson came up with this idea and 
    has actually coded it up.  Or, you could do as the CERN server does 
    --- buffer the client socket as usual, and then pipe the contents 
    of the buffer to any script that wants to see them, but that's more 
    work starting from the existing NCSA code.
     
     
Return-Path: <chi@nb.rockwell.com>
Received: from www10.lcs.mit.edu (www10.w3.org) by www19 (5.0/NSCS-1.0S) 
	id AA12401; Mon, 27 Feb 1995 21:01:09 +0500
Received: from gate.nb.rockwell.com by www10.lcs.mit.edu (5.0/NSCS-1.0S) 
	id AA26318; Mon, 27 Feb 1995 21:01:07 +0500
Received: by gate.nb.rockwell.com (5.57/Ultrix3.0-C)
	id AA22087; Mon, 27 Feb 95 17:57:16 -0800
Received: from tahoe.nb.rockwell.com.dcdnis by atlas.nb.rockwell.com (4.1/SMI-4.1)
	id AA04908; Mon, 27 Feb 95 18:01:01 PST
Date: Mon, 27 Feb 95 18:01:01 PST
From: chi@nb.rockwell.com (Calvin Chi)
Message-Id: <9502280201.AA04908@atlas.nb.rockwell.com>
To: www-talk@www10.w3.org
Subject: Secured WWW server? Comparison?
Cc: chi@nb.rockwell.com
Content-Length: 424

Hi,
   
   We are currently considering purchasing and setting up a secure
WWW server so we can pass sensitive documents back and forth.
My questions are:

   a. "Are there secure servers, other than Netscape Commerce, out there?"

   b. "Has anyone done any comparison between these servers?"
       Functionality?  Cost?  Encryption mechanism used?  Etc.

   Thanks in advance.

 
Calvin Chi              


chi@nb.rockwell.com
Return-Path: <roeber@cern.ch>
Received: from dxmint.cern.ch by www19 (5.0/NSCS-1.0S) 
	id AA18084; Tue, 28 Feb 1995 06:55:35 +0500
Received: from www0.cern.ch by dxmint.cern.ch (5.65/DEC-Ultrix/4.3)
	id AA06341; Tue, 28 Feb 1995 12:55:32 +0100
Received: from dxmint.cern.ch by www0.cern.ch (5.0/SMI-4.0)
	id AA07639; Tue, 28 Feb 1995 12:55:31 --100
Received: from ptsun03 by dxmint.cern.ch (5.65/DEC-Ultrix/4.3)
	id AA06329; Tue, 28 Feb 1995 12:55:29 +0100
Message-Id: <9502281155.AA06329@dxmint.cern.ch>
From: "Frederick G.M. Roeber" <roeber@cern.ch>
Date: Tue, 28 Feb 95 12:57:41 100
Sender: roeber@dxmint.cern.ch
To: www-talk@www0.cern.ch
Mime-Version: 1.0
X-Mailer: Mozilla/1.0N (X11; SunOS 4.1.1 sun4c)
Content-Type: text/plain;  charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Subject: Re: Hey - it's back.
X-Url: news:A5AA@cernvm.cern.ch
Content-Length: 2298

>Oh dear, it seems that mail has been lost :-(

Actually, it's all there; occupying some 36 megabytes on one
of my machines...  but I don't know if Arthur plans on
dumping it into the queue.  There are probably a lot of resends..

>At least the mailing list is back.  After all the trouble at
>CERN it's amazing it came back this quickly.  I wonder if some
>disgruntled employee ripped the wires out the back of the mail
>machine.  

This had nothing to do with the PS incident.  "An Incident"
(can't say much) happened here a few weeks ago, after which we
had to reinstall the operating systems on our various WWW servers.
I got the main services going (http, telnet-access, ftp, etc.); 
the person doing the mail services (listproc, Agora) got Agora 
going here, but then went to MIT to properly install everything 
there.  So this list is actually being served from the US now.

(Oh, and the PS problem isn't as bad as initially feared -- maybe
only a couple weeks delay to startup, and maybe the physics program
won't be impacted at all!  Lotsa overtime being done..)

> I'm not sure which is worse: postal workers, or particle 
> physicists...

Particle physicists are wonderful, charming people.  The guy
who damaged the PS was a technician, I believe.

> Other than some messages from the www-talk list, I have not received mail
> from ANY of the CERN mailing lists for about 3 weeks. What kind of trouble
> has CERN been having, and does anybody know if any of the other mailing
> lists are back (html, announce, etc.)?

The www-talk list is gated at CERN to a "local" newsgroup, cern.www.talk,
which many other sites (including some big commercial ones) pick up.  
There've been some people using it as a regular newsgroup.

Regarding the restoration of service at MIT: I suspect www-talk was the 
second priority (after Agora), but the other lists will probably all be
served from the same machine by the same software, so I'd bet the others
will be back soon.  (www-announce might need a moderator, though.)  In 
any case, I'm sure Arthur will post a message when everything is going
perfectly.

> I heard a very unfounded rumour that CERN is pulling out of the Web 
> project. I hope to God this is not true!

It's true.  This is my last day!
--
<a href="nowhere"><i>Frederick.</i></a>

Return-Path: <Prasad.Wagle@eng.sun.com>
Received: from dxmint.cern.ch by www19 (5.0/NSCS-1.0S) 
	id AA02693; Tue, 28 Feb 1995 20:35:39 +0500
Received: from www0.cern.ch by dxmint.cern.ch (5.65/DEC-Ultrix/4.3)
	id AA25502; Wed, 1 Mar 1995 02:35:36 +0100
Received: from dxmint.cern.ch by www0.cern.ch (5.0/SMI-4.0)
	id AA00775; Wed, 1 Mar 1995 02:35:31 --100
Received: from Sun.COM by dxmint.cern.ch (5.65/DEC-Ultrix/4.3)
	id AA25486; Wed, 1 Mar 1995 02:35:27 +0100
Received: from Eng.Sun.COM (engmail2.Eng.Sun.COM) by Sun.COM (sun-barr.Sun.COM)
	id AA19119; Tue, 28 Feb 95 17:28:31 PST
Received: from haven.eng.sun.com by Eng.Sun.COM (5.x/SMI-5.3)
	id AA10470; Tue, 28 Feb 1995 17:28:27 -0800
Received: by haven.eng.sun.com (5.0/SMI-SVR4)
	id AA20570; Tue, 28 Feb 1995 17:26:12 +0800
Date: Tue, 28 Feb 1995 17:26:12 +0800
From: Prasad.Wagle@eng.sun.com (Prasad Wagle)
Message-Id: <9503010126.AA20570@haven.eng.sun.com>
To: www-speed@tipper.oit.unc.edu, www-talk@www0.cern.ch, specsfs@dg-rtp.dg.com
Subject: Industry Standard HTTP Server Benchmark Based on SPEC SFS (LADDIS)
X-Sun-Charset: US-ASCII
Content-Length: 3932


I have converted LADDIS (industry standard SPEC benchmark to measure
NFS performance) to an HTTP server benchmark. Currently it uses a very
elementary workload (one URL). The results of this benchmark are
included at the end of this message.

The advantages of this benchmark are:
- it is a multiclient benchmark
- the client-side implementation does not influence benchmark results,
  which is the way it should be for a server benchmark
The disadvantages of this benchmark are:
- the current workload is not realistic

I need help to:
- Review benchmark methodology/implementation
- Create a realistic workload (or maybe different workloads for
  different environments)

LADDIS was originally developed by six vendors who saw the need for
better NFS benchmarks (Legato, Auspex, DEC, Data General, Interphase,
and Sun).  The original work group then took the benchmark to SPEC for
further development and promotion as an industry standard.  LADDIS has
considerably helped performance evaluation of NFS servers, thereby
contributing to the development of better servers.

I want to make the same thing happen for HTTP servers.  It's important
that such work not be done by any single vendor, nor with any single
narrow viewpoint of the requirements.  Would people in this group be
interested in working together to create an industry standard benchmark
for HTTP servers?

I would like to thank the SMCC Performance Engineering Group for their
support in this effort.

Regards,
Prasad

Note: this table illustrates the type of output generated by this
benchmark.  The actual numbers aren't meaningful due to the dummy
workload and uncontrolled test environment.

			Benchmark Results
************************************************************************

Aggregate Test Parameters:
    Number of processes = 1
    Requested Load (HTTP operations/second) = 10
    Warm-up time (seconds) = 1
    Run time (seconds) = 120
Aggregate Results for 1 Client(s), Tue Feb 28 13:37:34 1995
HTTP Server Benchmark Version 1, Creation - 15 February 1995
--------------------------------------------------------------------------
HTTP    Target Actual     HTTP   HTTP   Mean    Std Dev  Std Error   Pcnt
Op       HTTP   HTTP      Op     Op    Response Response of Mean,95%  of
Type     Mix    Mix     Success Error   Time     Time    Confidence  Total
         Pcnt   Pcnt     Count  Count  Msec/Op  Msec/Op  +- Msec/Op  Time
--------------------------------------------------------------------------
get        80%   82.0%        32     0  3271.62   822.29      9.93     85.9%
head       10%    5.1%         2     0  2450.00    28.69      7.42      4.0%
post       10%   12.8%         5     0  2437.80    43.81      5.80     10.0%
put         0%    0.0%         0     0     0.00     0.00      0.00      0.0%
delete      0%    0.0%         0     0     0.00     0.00      0.00      0.0%
checkout    0%    0.0%         0     0     0.00     0.00      0.00      0.0%
checkin     0%    0.0%         0     0     0.00     0.00      0.00      0.0%
showmethod  0%    0.0%         0     0     0.00     0.00      0.00      0.0%
link        0%    0.0%         0     0     0.00     0.00      0.00      0.0%
unlink      0%    0.0%         0     0     0.00     0.00      0.00      0.0%
--------------------------------------------------------------------------
INVALID RUN reported for Client 1 (haven).

        --------------------------------------------------------
        | AGGREGATE RESULTS SUMMARY                            | 
        --------------------------------------------------------
HTTP THROUGHPUT:       0 Ops/Sec   AVG. RESPONSE TIME:  3122.6 Msec/Op
HTTP MIXFILE: [ default ]
AGGREGATE REQUESTED LOAD: 10 Ops/Sec
TOTAL HTTP OPERATIONS:     39      TEST TIME: 119 Sec
NUMBER OF CLIENTS: 1

------------------------------------------------------------------------

************************************************************************
Received on Saturday, 25 February 1995 22:01:00 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Wednesday, 27 October 2010 18:14:16 GMT