W3C home > Mailing lists > Public > www-lib@w3.org > July to September 1998

Re: Segfault in w3c with PUT

From: Bob Racko <bobr@dprc.net>
Date: Thu, 30 Jul 1998 03:59:15 -0400
Message-Id: <3.0.3.32.19980730035915.02da79b0@shell14.ba.best.com>
To: www-lib@w3.org
From the list of messages in the original post by Matthew P. Bohnsack,
I have included below all the ones that raised my eyebrows.
A * is next to msgs I have seen at other sites when a timeout or
'soft' transfer limit was hit [some in earlier versions of libwww].

*Buffer...... Waiting 30ms on 0x8062798
 EventOrder.. no event found for socket 4, type HTEvent_WRITE.

*WWWLibTerm.. Cleaning up LIBRARY OF COMMON CODE
*Net Object.. Kill ALL Net objects!!!
 Net Object.. Killing 0x8051450

 HTTPGen..... ABORTING...
 HTTPRequest. ABORTING...
 Buffer...... ABORTING...
 FileWriter.. ABORTING...
*MIMERequest. ABORTING...
 Net Object.. Delete 0x8051450 and call AFTER filters
 Host Object. keeping persistent socket 4
 Channel..... Delete 0x805a738 with semaphore 1
 HTTPStatus.. ABORTING...
 Buffer...... ABORTING...
 Channel..... Semaphore decreased to 0 for channel 0x805a738

*HTError..... Generating message
 WARNING: Fatal Error: Data transfer interrupted

 Load End.... Request ended with code -902
*Net After... calling 0x8049a50 (request 0x804bf30, response 0x8072b30,
 status -902, context (nil))
 HTError..... Generating message
 Request..... Delete 0x40104c94
*(onexit)Segmentation fault

cause:

My experience is that many ISP gateways and some firewalls can be configured
to softly limit transfers to 2 MB.  Soft?  If you were worried that
sensitive company data was being downloaded or uploaded,
you might just force a "stall" on big moves until you could verify the
source/destination of the transfer.  Some packet filters
buffer up an entire transfer with suspicious content until it
can be checked for viruses ... particularly if it uses a protocol or
type of service that is little-used (like PUT).

A large transfer can also be a problem on some protocol stacks where there
is improper handling of the final FIN-ACK in the TCP/IP layer.
[Ever try to download the latest Netscape browser and stall at the very end?
- on Win95 the file is complete but never written to disk
- on NT it is complete on disk but gets deleted when
 you try to dismiss the dialog box!]

Alternatively, some gateways back off service so that longer transfers
get less and less priority. Eventually you time out.

Consider also that the server you are PUTting to figures heavily in this.
What type is it? Is /destination a CGI app?
The /destination may not be able to handle a large buffer
and may be choking. That choke may ripple back upstream
to your application (the command-line tool) and libwww.


effect:

A segmentation fault on shutdown/exit can be caused by corruption in the
onexit/atexit handling or in the heap in general.
[A double free, freeing something that was never allocated, or a
write to something after you free it.]


Try this: -Temporarily- disable the call to "free()" in HTMemory.c,
then see whether you still get the segfault.

Tools are being built to help discover
problem areas like this ... watch this space.





{-----}
bobr@dprc.net
Received on Thursday, 30 July 1998 04:01:05 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 23 April 2007 18:18:28 GMT