W3C home > Mailing lists > Public > www-lib@w3.org > October to December 2001

Re: Broken pipes & lost requests

From: Nelson Spessard <nspessard@yahoo.com>
Date: Mon, 17 Dec 2001 20:24:36 -0800 (PST)
Message-ID: <20011218042436.9551.qmail@web13401.mail.yahoo.com>
To: Azzurra Pantella <azzurra.pantella@netikos.com>
Cc: www-lib@w3.org
OK, thanks for getting back to me.  Let me tell you
what I am seeing and what I am trying to do.
I believe I have identified part of my problem,
but not all of the issues.

The segmentation fault is occurring in
terminate_handler.  (As I am new to this library, I am
using the Robot as a model.)  I am checking several
queues for processing within the system (unless
someone can tell me how to redirect to a file at the
same time as to the parser):
- One of the queues calls HTLoad_toFile.
- One of the queues recreates a new anchor to walk.
- One of the queues handles the parsing of the requests
as they exit the main loop.
It is in no way elegant, but I am very pressed to
complete this.

As I see it, the segmentation fault occurs when two
incoming streams complete at the same time and there
is only one entry in the queue.
Voila... I have a race condition:
The first tests the queue and removes the entry.
The second one comes in behind, tests the
queue (still positive), removes the entry... bang.
Well, as I am getting back into C programming after a
hiatus, I am trying to figure out a way to clear this
queue safely.  A basic integer flag does not work (it
appears that the parse routine can interject and time
out, preventing links from being added).  Perhaps a
semaphore or a mutex would work, but the implementation
is eluding me.  I think the former might work, but I
need to see whether the blocking will interfere with
the event loop.  Any ideas, anyone?  I'm sure someone
has done something like this.

As to the lost requests: yes, I do see them as I
described earlier.  Usually the system will hang.  It
appears to happen when two requests are being made to
the same URI; when one completes, all of the timers
appear to be cleared.  I need to investigate this
further.  As I am for now reusing the robot code, I am
not sure why this is occurring.  I am not creating
timers of my own at this time.

As to the memory leak, I have not seen a significant
one so far; however, I can only get maybe 200
requests processed before the system either hangs
without timers or dies from the segfault.  The
HTAnchor problem does make sense to me from what I
have seen; it is something I will be aware of.

--- Azzurra Pantella <azzurra.pantella@netikos.com> wrote:
> I think you are right!
> It can't really be called a bug, but a sort of
> unexpected and undesired
> behaviour, not to say a programming error.
> In fact, what reason might there be not to recover
> after a write which
> resulted in a broken pipe?
> I consider the side effect of losing requests quite
> serious.
> But let me ask you something:
> - Have you too observed some request loss?
> - When you talk of timers, do you mean the HTTimer
> object bound to the output
> stream defined in HTBufWrt.c?
>   If this is the case, why, in your opinion,
> check the value of
> this timer in HTBufferWriter_lazyFlush()?
>   Why not check and eventually dispatch its cbf
> function only in the
> HTEventListLoop (HTEvtLst.c), as we do for
>   any other timers?  Recovery would be quite easy and
> hopefully safe.
> - Are you too seeing memory growth when submitting
> many requests to the same
> host?
>   If so, you may refer to other mails with subject
> "HTAnchor" in the mailing
> list.  In fact, there is no more doubt that the
> HTAnchor objects are never
> freed until the end of the program execution, and
> that a new HTAnchor
> object is created for every new request to the same
> host if the URLs are
> different.
> Regards,
> Azzurra

Received on Monday, 17 December 2001 23:24:37 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:33:54 UTC