Re: libwww pipelining bug?

Vaclav,

HTWriter_write is eventually called from the PUTC, PUTS and PUTBLOCK functions
used in HTTPReq.c and HTTPGen.c.
You can stop in HTWriter_write at the line where the broken pipe is handled and
you will see the invocation stack.  I am not sure that fixing that will help
you.  For me it worked when I had problems loading documents from a secure
server through a secure proxy (SSL).  Checking the return code for each write
would slow down execution in the normal case, though.  In the original
HTWriter_write, HT_CLOSED is not even returned in the case of a broken pipe;
HT_ERROR is returned, but in most cases it is not checked.
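
For reference, the write-side change can look roughly like the sketch below.
This is a simplified illustration, not the actual libwww source: the helper
name and the plain send()/errno handling stand in for whatever the real stream
write does, it assumes SIGPIPE is ignored so a broken pipe shows up as EPIPE,
and the HT_* status codes are assumed to come from the library headers.

    #include <errno.h>
    #include <sys/socket.h>
    #include "HTUtils.h"        /* for the HT_* status codes (assumed) */

    /* Simplified sketch: report a closed connection as HT_CLOSED so that
     * the PUTC/PUTS/PUTBLOCK callers can see it and recover the pipeline
     * instead of continuing to write into a dead socket. */
    static int net_write (int soc, const char * buf, size_t len)
    {
        ssize_t written = send(soc, buf, len, 0);
        if (written < 0) {
            if (errno == EPIPE || errno == ECONNRESET)
                return HT_CLOSED;       /* peer closed the connection */
            return HT_ERROR;            /* any other write failure */
        }
        return HT_OK;
    }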

My code in HTTPMakeRequest looks something like:

    /* Generate the HTTP/1.x RequestLine */
    if (me->state == 0) {
        if (method != METHOD_INVALID) {

            /* olga for https proxy */
            ret = PUTS(HTMethod_name(method));
            if (ret == HT_CLOSED || ret == HT_RESTART_PROXY_REQUEST)
                return ret;

            ret = PUTC(' ');
            if (ret == HT_CLOSED || ret == HT_RESTART_PROXY_REQUEST)
                return ret;
        } else {
            ret = PUTS("GET ");
            if (ret == HT_CLOSED || ret == HT_RESTART_PROXY_REQUEST)
                return ret;
        }

        me->state++;
    }

......  and so on.  I also put changes into the functions to which that
function returns (and so on along the invocation stack), since in some of them
the return value is not checked either.
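
The pattern in the callers is the same.  Roughly (the function name and the
exact call chain here are illustrative, not a quote of the libwww sources,
assuming HTTPMakeRequest keeps its (HTStream *, HTRequest *) shape):

    /* Illustrative only: a caller that propagates the two recovery codes
     * instead of swallowing them, so the pipeline can be recovered at the
     * top of the invocation stack. */
    static int send_request (HTStream * me, HTRequest * request)
    {
        int status = HTTPMakeRequest(me, request);
        if (status == HT_CLOSED || status == HT_RESTART_PROXY_REQUEST)
            return status;      /* let the level above recover the pipe */
        /* ... continue generating the rest of the request only when the
           connection is still usable ... */
        return status;
    }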


Olga.

On 30-Sep-99 Vaclav Barta wrote:
> Olga Antropova wrote:
>> 
>> Hi Mikhail,
>> 
>> Is it that the requests are not failing but hanging unhandled in the pipeline?
>> Trace may help. I used blocking sockets and in some cases if one request
>> fails then all the subsequent requests to the same host would fail. I
> Yes, I've seen that too. You can reproduce it quite easily if you have
> lots (let's say a hundred) URLs registered for download and the HText
> handlers take a long time to finish. :-( I worked around it by registering only
> a limited number of URLs and retrying HTEventList_newLoop() in *my* loop
> as long as there are URLs finished with status -1. It helped, but it's
> ugly, inefficient (some pages are downloaded only to be thrown away) and
> dangerous (if there's some other problem, it will cycle forever). OTOH,
> I don't understand the problem well enough to fix it... :-(
> 
>> - you can see that several functions do not check the return value of
>> HTWriter_write and just continue (to write on closed socket). That
>> happens in HTTPReq.c and HTTPGen.c. In most cases the HT_CLOSED return
>> will be eventually checked in consecutive writes (and HTHost_recoverPipe
>> be called later) but not in all.
> Well, could somebody repair it? I can't even find calls to
> HTWriter_write in HTTPReq.c and HTTPGen.c, so I probably shouldn't...
> 
>       Bye
>               Vasek
> --
> I have a web spider, too!
> http://www.locus.cz/linkcheck/
