Subject: WWWVL: gathering stats (usage)
From: email@example.com (Keith Instone)
Date: Wed, 7 Dec 1994 09:09:48 -0500
Here is a proposal, off the top of my head, and
probably not the best way to do it:
At the same level as the "home page" for each area
of the Virtual Library exists a file called "stats".
The URL for this file can easily be computed by looking
at the URL for the home page. The URLs for ALL of the
stats can easily be found by scanning through the main
WWW VL page itself.
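As a sketch of that URL derivation (the VL path here is an invented example, not a real entry), the stats URL is just the home-page URL with the final path component replaced:

```python
from urllib.parse import urljoin

def stats_url(home_page_url):
    # Replace the last path component of the home page URL with
    # "stats", per the proposal that the stats file lives at the
    # same level as the home page.
    return urljoin(home_page_url, "stats")

# Hypothetical VL entry, for illustration only:
print(stats_url("http://www.example.org/vlib/AeroSpace/Overview.html"))
```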
So, CERN runs a program nightly that gathers up info
from all of the stats files and generates the appropriate
(sorted) HTML files as an optional way to see the topics
in the VL.
Usage should be one of these stats. Probably average number
of downloads of the home page per day (take the weekly total
and divide by seven, so we don't have weekday/weekend problems).
Later, usage will probably have to be listed in terms of HOURS,
but we can deal with that when the time comes.
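In code, that weekly averaging is just (the count below is invented):

```python
def avg_downloads_per_day(weekly_total):
    # Weekly total divided by seven smooths out the
    # weekday/weekend difference in traffic.
    return weekly_total / 7.0

print(avg_downloads_per_day(994))
```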
Other stats could be # of links and # of megabytes, perhaps. For
each stat, the CERN program would generate a different variation
of the VL list of topics. Even fancier stats can be dreamed up, such
as the number-of-links-traversed (since the real usefulness of a
home page full of links is not who downloads it, but who finds
something worth jumping to from it).
The stats file should be really simple, so that someone can edit it by
hand, and so that different platforms can easily write programs
to generate it automatically. Maybe something like:
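One possible layout, purely my own guess (the field names and values are invented, not part of the original proposal), is one stat per line as a keyword/value pair:

```
# stats file for a hypothetical VL topic -- all values illustrative
avg-downloads-per-day: 142
links: 350
megabytes: 1.2
```

Keyword/value lines like this are trivial to edit by hand and to parse on any platform.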
The stat collection program sounds pretty easy to write: grab all
of the HREFs from the WWW VL page, modify the URL for the stats file,
use url_get to grab the file, parse the file for each stat, add the
stat to a big file containing that stat for each of the topics,
sort each file and slap the right HTML at the beginning and the end,
and you are done.
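A rough sketch of that pipeline in Python (url_get is replaced by a stubbed fetch function, and the VL page and stats contents are invented examples, not real data):

```python
import re
from urllib.parse import urljoin

# Invented stand-ins for the real WWW VL page and its stats files.
VL_PAGE = """
<ul>
<li><a href="http://example.org/vlib/AeroSpace/Overview.html">Aeronautics</a>
<li><a href="http://example.org/vlib/Bio/Overview.html">Biosciences</a>
</ul>
"""

FAKE_STATS = {
    "http://example.org/vlib/AeroSpace/stats": "avg-downloads-per-day: 142\nlinks: 350\n",
    "http://example.org/vlib/Bio/stats": "avg-downloads-per-day: 78\nlinks: 1200\n",
}

def fetch(url):
    # Stub standing in for url_get; a real version would do an HTTP GET.
    return FAKE_STATS[url]

def collect(page_html, stat_name):
    rows = []
    # Grab all of the HREFs from the WWW VL page ...
    for url in re.findall(r'href="([^"]+)"', page_html, re.I):
        # ... modify each URL to point at its stats file ...
        stats = fetch(urljoin(url, "stats"))
        # ... and parse the requested stat out of the file.
        m = re.search(rf"^{re.escape(stat_name)}:\s*(\S+)", stats, re.M)
        if m:
            rows.append((float(m.group(1)), url))
    # Sort, biggest first, ready to be wrapped in the right HTML.
    return sorted(rows, reverse=True)

for value, url in collect(VL_PAGE, "avg-downloads-per-day"):
    print(value, url)
```

The sorted (value, url) pairs are the body of the generated topic list; wrapping them in HTML is the "slap the right HTML at the beginning and the end" step.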