
Re: libwww pipelining bug?

From: Vaclav Barta <vbar@comp.cz>
Date: Thu, 30 Sep 1999 11:18:37 +0000
Message-ID: <37F3470D.1F25D035@comp.cz>
To: www-lib@w3.org
Olga Antropova wrote:
> Hi Mikhail,
> Is it that the requests are not failing but hanging unhandled in the pipeline?
> A trace may help. I used blocking sockets, and in some cases if one request
> fails then all the subsequent requests to the same host would fail. I
Yes, I've seen that too. You can reproduce it quite easily if you have
lots (say a hundred) of URLs registered for download and the HText
handlers take a long time to finish. :-( I worked around it by registering
only a limited number of URLs and retrying HTEventList_newLoop() in *my*
loop as long as any URLs finished with status -1. It helped, but it's
ugly, inefficient (some pages are downloaded only to be thrown away) and
dangerous (if there's some other problem, it will cycle forever). OTOH,
I don't understand the problem well enough to fix it... :-(

> - you can see that several functions do not check the return value of
> HTWriter_write and just continue (to write on a closed socket). That
> happens in HTTPReq.c and HTTPGen.c. In most cases the HT_CLOSED return
> will be eventually checked in consecutive writes (and HTHost_recoverPipe
> be called later) but not in all.
Well, could somebody repair it? I can't even find calls to
HTWriter_write in HTTPReq.c and HTTPGen.c, so I probably shouldn't...

I have a web spider, too!
Received on Thursday, 30 September 1999 13:07:31 UTC
