
real-time problem with libwww 5.2.8

From: Francois Nicot <fnicot@silicom.fr>
Date: Tue, 04 Jul 2000 14:43:56 +0200
Message-ID: <3961DC0C.88E2F146@silicom.fr>
To: "www-lib@w3.org" <www-lib@w3.org>

Hi all,

I am facing a real-time problem that occurs only when I crawl an intranet
(which is obviously faster than the Web).

The facts:

I am developing a web robot with libwww 5.2.8 on Solaris 2.7.

I use the event loop as shown in the W3C sample webbot (requests and
responses are processed asynchronously as they arrive).

Requests are sent in non-preemptive mode.

To increase efficiency, I load the request manager (HTLoadAnchor) with a
batch of requests (about 50) built from a list of URLs.
At that point I have not yet started the event loop
(HTEventList_newLoop()). Requests should be kept in the request manager
until the number of pending requests exceeds the limit or the timeout
expires.
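For reference, the batching step looks roughly like this (a minimal sketch
only; the request setup and the missing error handling are simplifications
on my side, not my actual code):

```c
/* Sketch: queue a batch of requests, then start the event loop.
 * Assumes libwww 5.2.8 with the usual WWWLib/WWWInit headers. */
#include "WWWLib.h"
#include "WWWInit.h"

#define BATCH_SIZE 50

void load_batch (char * urls[], int n)
{
    int i;
    for (i = 0; i < n && i < BATCH_SIZE; i++) {
        HTRequest * request = HTRequest_new();
        HTAnchor * anchor = HTAnchor_findAddress(urls[i]);
        HTRequest_setOutputFormat(request, WWW_SOURCE);
        /* Hand the request to the request manager. My expectation was
         * that it would be queued here, not issued immediately. */
        HTLoadAnchor(anchor, request);
    }
    /* Only now do I start the event loop. */
    HTEventList_newLoop();
}
```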

However, it looks like requests are issued immediately by the request
manager as soon as it handles them, once the core has switched to another
thread. In that case the code after the first HTLoadAnchor() call is
never executed, because the response has already been received. Thus the
remaining URLs in the list are not processed and my robot stops
prematurely. This is why I think it is a real-time problem.
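One workaround I am considering (a sketch only, assuming the behaviour
described above): register an After filter with HTNet_addAfter() and count
outstanding requests there, so the event loop is only stopped once every
URL in the list has been both issued and completed. The filter signature,
HT_ALL, and HT_FILTER_LAST below are taken from my reading of the libwww
headers; I have not tested this:

```c
/* Sketch: stop the loop only when every queued request has terminated. */
#include "WWWLib.h"
#include "WWWInit.h"

static int outstanding = 0;   /* requests issued but not yet terminated */
static BOOL all_issued = NO;  /* set once the whole URL list is loaded  */

int terminate_handler (HTRequest * request, HTResponse * response,
                       void * param, int status)
{
    HTRequest_delete(request);
    if (--outstanding <= 0 && all_issued)
        HTEventList_stopLoop();   /* nothing left to wait for */
    return HT_OK;
}

/* Registration, done once at startup:
 *   HTNet_addAfter(terminate_handler, NULL, NULL, HT_ALL, HT_FILTER_LAST);
 * Each HTLoadAnchor() call increments `outstanding` before loading, and
 * `all_issued` is set to YES after the last URL has been handed over. */
```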

I only see this problem on our intranet, when the machines are not very
busy. Never on the Web.

I would like to know if anyone has already faced this problem, has found
a workaround, or whether I am wrong in my analysis.

Thanks a lot.

Francois Nicot.

Received on Tuesday, 4 July 2000 08:40:31 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:33:53 UTC