
Re: CGIs to manage backlink information

From: Ka-Ping Yee <kryee@calum.csclub.uwaterloo.ca>
Date: Wed, 18 Dec 1996 03:34:58 +0900
Message-ID: <32B6E7D2.68C0C44C@csclub.uwaterloo.ca>
To: Alejandro Rivero <rivero@sol.unizar.es>
CC: Ka-Ping Yee <kryee@wheat.uwaterloo.ca>, www-talk@w3.org

Hi there again.

Alejandro Rivero wrote:
> In fact I included your node in the
> references of my paper (PAPER224.html). Of all the backlink projects I
> reviewed, it is the one I like most.

Oh, thanks.  I didn't realize.

> Though, I didn't find any technical
> explanation in your pages, so I could only guess. I supposed you do
> some post-processing of the referer log, as reading the referring
> HTML is a bit time-consuming.

Yes, it is.  The program goes through the referer log, compares those
links to the ones it already has, and attempts to retrieve the
referring pages from the Web one by one.  Based on whether the
retrieval succeeds and whether the page actually contains a backlink,
the referers are moved among the five categories of confidence.
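The verification step could be sketched roughly like this (a minimal
sketch in Python; the message doesn't name the five categories, so the
labels below are invented for illustration, and the backlink check is
just a naive regular-expression match on href attributes):

```python
import re

# Hypothetical confidence categories -- the original five labels are
# not given in the message, so these names are invented.
NEW, CONFIRMED, MISSING, UNREACHABLE, DEAD = (
    "new", "confirmed", "missing", "unreachable", "dead")

def contains_backlink(html, our_url):
    """Return True if the fetched page appears to link back to our_url."""
    pattern = r'href\s*=\s*["\']?' + re.escape(our_url)
    return re.search(pattern, html, re.IGNORECASE) is not None

def reclassify(current, fetched_ok, page_html, our_url):
    """Move a referer between categories after one verification attempt."""
    if not fetched_ok:
        # A repeated fetch failure demotes an unreachable referer further.
        return DEAD if current == UNREACHABLE else UNREACHABLE
    if contains_backlink(page_html, our_url):
        return CONFIRMED
    return MISSING
```

A batch run would then iterate over the logged referers, fetch each
page, and call `reclassify` to update its stored category.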

> Note that I cannot do such a check in my
> method, as it works in real time, and if we opened an internet
> socket for each hit we got, our machine could be saturated twice
> as fast as under normal use.

True.  But even when I run the script periodically, the processing
takes a long time.  I *wish* people didn't all use broken browsers!
If only we could trust the browser not to lie about referers all the
time, all that work would be unnecessary.

       3B Computer Engineering, Waterloo (on exchange in Tottori, Japan)
http://www.lfw.org/math/ brings math to the Web as easy as <se>?pi?</se>
Received on Tuesday, 17 December 1996 13:40:50 UTC