From: Gary Adams - Sun Microsystems Labs BOS <Gary.Adams@east.sun.com>
Date: Mon, 26 Jun 1995 09:14:54 -0400
To: payne@openmarket.com, brian@organic.com
Cc: www-talk@www10.w3.org
Defining an API for CLF is a great idea! The current text
file format provides a decent baseline for "current practice".
e.g., from the wwwstats utility script:
($host, $rfc931, $authuser, $timestamp, $request, $status, $bytes)
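For concreteness, here is a rough sketch (mine, not the actual
wwwstats code) of how those seven fields fall out of one CLF line
in Perl:

    # Split one CLF line into the seven standard fields
    # (a sketch only; wwwstats itself does more validation).
    while (my $line = <>) {
        if ($line =~ /^(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+)/) {
            my ($host, $rfc931, $authuser, $timestamp,
                $request, $status, $bytes) = ($1, $2, $3, $4, $5, $6, $7);
            # ... tally per-host / per-status counts here ...
        }
    }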
The proposed tagged-field log records (payne@openmarket.com)
represent a reasonable first cut at a message to be logged.
log {start 803173054.917815} {method GET} {url /~payne/link.html} \
{bytes 0} {error {file not found}} {status 404} {end 803173054.930446} \
{host localhost}
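A quick sketch of pulling the {tag value} pairs back out of such a
record in Perl; nested values such as {error {file not found}} are
handled one brace level deep only:

    # Parse one tagged record (one per line) into a hash of fields.
    while (my $rec = <>) {
        my %f;
        while ($rec =~ /\{(\w+) (\{[^{}]*\}|[^{}]*)\}/g) {
            my ($tag, $val) = ($1, $2);
            $val =~ s/^\{(.*)\}$/$1/;    # strip braces from quoted values
            $f{$tag} = $val;
        }
        # %f now holds e.g. $f{method} eq "GET", $f{status} eq "404"
    }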
Concentrating on an API, or on a set of messages acceptable to a
logging daemon, means the back-end storage could be a simple text
file or a more powerful database engine. The ability to use SQL for
report generation would open the data up to better manipulation and
analysis tools.
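As a strawman, the front end of such an API might be little more
than a routine like this (the name log_event and both backends are
my invention):

    # Servers call log_event(); the backend is chosen at startup.
    open(LOGFILE, '>>access.log') || die "can't append to log: $!";

    sub log_event {
        my ($backend, %f) = @_;
        my $rec = join ' ', map { "{$_ $f{$_}}" } sort keys %f;
        if ($backend eq 'file') {
            print LOGFILE "log $rec\n";
        } else {
            # hand %f to a database engine instead, e.g.
            #   INSERT INTO accesses (host, url, status, bytes, ...)
        }
    }

    log_event('file', host => 'localhost', method => 'GET',
              url => '/~payne/link.html', status => 404, bytes => 0);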
I can think of three circumstances where this approach could save
some effort in maintaining servers.
1. We augmented an http server for proxy and caching (before they
   were so prevalent) and appended the caching statistics to the end
   of the standard CLF record.
2. We routinely run multiple http daemons on the same file server
   on different ports. A combined log file would make it easier
   to generate a single report of the servers' activities (albeit
   sorting multiple CLF text files together serves the same
   function).
3. There are thousands of http servers running on our internal network.
A network API for logging all of these server accesses could provide
dynamic maps of information flow throughout the organization.
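For that third case, a small per-server sender plus one collector
would be enough. A sketch, with a made-up collector host and port:

    use IO::Socket::INET;

    # Hypothetical central collector; host and port are made up.
    my $sock = IO::Socket::INET->new(
        PeerAddr => 'loghost.corp.example.com',
        PeerPort => 5140,
        Proto    => 'udp',
    ) || die "socket: $!";

    sub log_remote {
        my ($record) = @_;           # a record in the tagged format above
        $sock->send("log $record\n");
    }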
$.02