Re: libwww pipelining bug?

Olga Antropova wrote:
> 
> Hi Mikhail,
> 
> Is it that the requests are not failing but hanging unhandled in the
> pipeline? A trace may help. I used blocking sockets, and in some cases
> if one request fails then all the subsequent requests to the same host
> would fail. I
Yes, I've seen that too. You can reproduce it quite easily if you have
lots of URLs (say a hundred) registered for download and the HText
handlers take a long time to finish. :-( I worked around it by
registering only a limited number of URLs at a time and re-running
HTEventList_newLoop() in *my own* loop for as long as there are URLs
that finished with status -1. It helped, but it's ugly, inefficient
(some pages are downloaded only to be thrown away) and dangerous (if
there's some other problem, it will cycle forever). OTOH, I don't
understand the problem well enough to fix it properly... :-(
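
For what it's worth, this is roughly the shape of my workaround. It is
heavily simplified for this mail: url_entry, MAX_BATCH, remember_status
and fetch_all are names I just made up, the real code also installs the
HText handlers, and the library is assumed to be initialised already
(e.g. with HTProfile_newNoCacheClient()).

#include "WWWLib.h"
#include "WWWInit.h"

#define MAX_BATCH 10            /* how many URLs to register per pass */

typedef struct {
    const char * url;
    int          status;        /* final status seen by the after filter */
    BOOL         done;          /* YES = no need to retry this one */
} url_entry;

static int outstanding = 0;     /* requests still active in this batch */

/* After filter: remember how each request finished and stop the event
 * loop once the whole batch is done. */
PRIVATE int remember_status (HTRequest * request, HTResponse * response,
                             void * param, int status)
{
    url_entry * entry = (url_entry *) HTRequest_context(request);
    if (entry) {
        entry->status = status;
        entry->done = (status != -1);   /* -1: stuck in the pipeline, retry */
    }
    HTRequest_delete(request);
    if (--outstanding <= 0)
        HTEventList_stopLoop();         /* hand control back to fetch_all */
    return HT_OK;
}

PUBLIC void fetch_all (url_entry * urls, int count)
{
    int left = count;
    HTNet_addAfter(remember_status, NULL, NULL, HT_ALL, HT_FILTER_LAST);

    while (left > 0) {
        int i, issued = 0;
        HTRequest * request;
        outstanding = 0;

        /* Register only a small batch instead of all URLs at once. */
        for (i = 0; i < count && issued < MAX_BATCH; i++) {
            if (urls[i].done) continue;
            request = HTRequest_new();
            HTRequest_setContext(request, &urls[i]);
            issued++;
            outstanding++;
            if (HTLoadAbsolute(urls[i].url, request) == NO) {
                outstanding--;
                HTRequest_delete(request);
            }
        }
        if (outstanding > 0)
            HTEventList_newLoop();      /* runs until remember_status stops it */

        left = 0;
        for (i = 0; i < count; i++)
            if (!urls[i].done) left++;
        /* The dangerous part: if some request keeps ending with -1 for
         * another reason, this outer loop never terminates. */
    }
}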

> - you can see that several functions do not check the return value of
> HTWriter_write and just continue (writing to a closed socket). That
> happens in HTTPReq.c and HTTPGen.c. In most cases the HT_CLOSED return
> will eventually be caught by a subsequent write (and HTHost_recoverPipe
> called later), but not in all cases.
Well, could somebody fix it? I can't even find any calls to
HTWriter_write in HTTPReq.c or HTTPGen.c, so I'm probably not the right
person to try...
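
If I understand the code right (and I may well not), HTTPReq.c and
HTTPGen.c never call HTWriter_write by name: they write through the
generic stream interface, and the writer only sees the bytes at the far
end of the target chain, which would explain why grep finds nothing.
Assuming that's the case, the check Olga describes would look roughly
like this. It's only a sketch: the struct layout is just the shape
those stream modules usually have, and send_header_line /
generate_request_headers are names I invented, not real libwww
functions.

#include "WWWLib.h"

/* The HTTP request streams define their own struct _HTStream; this is
 * only the shape I am assuming here. */
struct _HTStream {
    const HTStreamClass * isa;
    HTStream *            target;   /* outgoing stream towards the socket */
};

/* Write one header line and hand any non-HT_OK status back to the
 * caller instead of dropping it. */
PRIVATE int send_header_line (HTStream * me, const char * line)
{
    int status = (*me->target->isa->put_string)(me->target, line);
    if (status != HT_OK)
        return status;      /* HT_WOULD_BLOCK, HT_CLOSED, HT_ERROR, ... */
    return (*me->target->isa->put_string)(me->target, "\r\n");
}

/* The point, as I read Olga's mail: every write has to be checked so
 * that on HT_CLOSED we stop right away and the pipe can be recovered
 * (HTHost_recoverPipe) instead of more data being pushed onto a dead
 * socket. */
PRIVATE int generate_request_headers (HTStream * me)
{
    int status;
    if ((status = send_header_line(me, "Accept: */*")) != HT_OK)
        return status;
    if ((status = send_header_line(me, "Pragma: no-cache")) != HT_OK)
        return status;
    return HT_OK;
}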

	Bye
		Vasek
--
I have a web spider, too!
http://www.locus.cz/linkcheck/

Received on Thursday, 30 September 1999 13:07:31 UTC