FW: Program cores on subsequent downloads

Oops!! This is in HTTP.c. However, there are about 9-10 other files with
similar modifications (HTNet.c being one of them). You may just want to go
to the CVS tree and look at all of them.
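
For what it's worth, the change amounts to verifying the http context
pointer before HTTPCleanup() dereferences it. A minimal sketch of the idea
(illustrative only; the actual code is in HTTP.c revision 1.186.2.1 on the
AMAYA branch and may well differ, and the context type and field names
below are inferred from the backtrace rather than copied from that
revision):

    /* Sketch only, not the real AMAYA-branch diff.  When HTNet_killAll()
     * drives HTTPEvent()/HTTPCleanup() during shutdown, the protocol
     * context may already have been torn down, so check it first. */
    HTNet * net = HTRequest_net(req);
    http_info * http = net ? (http_info *) HTNet_context(net) : NULL;
    if (http == NULL)
        return HT_OK;               /* nothing left to clean up */
    if (http->timer) {              /* HTTP.c:159, the dereference that
                                       faulted in the trace below */
        HTTimer_delete(http->timer);
        http->timer = NULL;
    }

Presumably the other modified files add the same sort of guard around
their own context pointers.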

Andy

> -----Original Message-----
> From: Yovav Meydad [mailto:yovavm@contact.com]
> Sent: Wednesday, May 17, 2000 11:35 AM
> To: 'Andy Levine'
> Cc: 'Warren Ho'; www-lib@w3.org
> Subject: RE: Program cores on subsequent downloads
>
>
> In which file is this fix ?
>
> Yovav
>
> -----Original Message-----
> From: www-lib-request@w3.org [mailto:www-lib-request@w3.org]On Behalf Of
> Andy Levine
> Sent: Wednesday, May 17, 2000 5:30 PM
> To: Warren Ho; www-lib@w3.org
> Subject: RE: Program cores on subsequent downloads
>
>
> Warren,
>
> Check out the CVS tree for libwww. There appears to be a fix for this
> problem (or at a minimum, a workaround), but it is NOT on the mainline
> branch. It is on the AMAYA branch; revision 1.186.2.1 has a check to
> ensure that the http pointer is valid before accessing it.
>
> Andy
>
> > -----Original Message-----
> > From: www-lib-request@w3.org [mailto:www-lib-request@w3.org]On Behalf Of
> > Warren Ho
> > Sent: Friday, May 12, 2000 4:45 PM
> > To: www-lib@w3.org
> > Subject: Program cores on subsequent downloads
> >
> >
> > Hi,
> >
> > I have a GUI app that downloads a file from a given URL.  A C++ class
> > wraps some of the libwww functions to perform the actual download.
> >
> > I am able to download the file successfully on the first try and on
> > subsequent tries.  However, the program eventually dumps core in the
> > libwww code, as follows:
> >
> > Program received signal SIGSEGV, Segmentation fault.
> > HTTPCleanup (req=0x1175290, status=200) at HTTP.c:159
> > 159         if (http->timer) {
> > Current language:  auto; currently c
> > (gdb) bt
> > #0  HTTPCleanup (req=0x1175290, status=200) at HTTP.c:159
> > #1  0x7e15e4 in HTTPEvent (soc=0, pVoid=0x11aabd0, type=HTEvent_CLOSE)
> >     at HTTP.c:1277
> > #2  0x802d24 in HTNet_kill (net=0x11782e8) at HTNet.c:989
> > #3  0x802de4 in HTNet_killAll () at HTNet.c:1015
> > #4  0x800b9c in HTLibTerminate () at HTLib.c:190
> > #5  0x7cc784 in HTProfile_delete () at HTProfil.c:47
> > #6  0x1825e4 in HRWWWTransferInterfaceImpl::cleanup (this=0x116ed78,
> >     request=@0xefffeab4) at ../../../Interface/WWWTransferInterface.C:427
> >
> > The code in my download function is:
> >
> > HRError
> > HRWWWTransferInterfaceImpl::getFile(
> >     const RWCString & strURL,
> >     const RWCString & strOutputFilename,
> >     HRProgressMonitor *pProgress /* = NULL */)
> > {
> >     FILE* outputFile = NULL;
> >     int   status =     0;
> >
> >     // Initialisation
> >     HTSetTraceMessageMask("sop");       // Show all traces
> >     HTRequest * request = HTRequest_new();
> >     HTRequest_setOutputFormat(request, DEFAULT_FORMAT);
> >     HTRequest_setContext (request, (void*)this);
> >
> >     HTProfile_newNoCacheClient(APP_NAME, HISTORISK_RELEASE_STRING);
> >
> >     // If we crap out before calling the terminate callback with a
> >     // successful load, we want some sort of general failure
> >     i_errStatus = HRERR_INIT_FAILURE;
> >
> >     i_pProgress = pProgress;
> >
> >     // Capture output and trace messages
> >     HTPrint_setCallback(printerCB);
> >     HTTrace_setCallback(tracerCB);
> >
> >     // Delete the default Username/password handler and replace it
> >     // with our own.
> >     HTAlert_deleteOpcode(HT_A_USER_PW);
> >     HTAlert_add(usernameAndPasswordCB, HT_A_USER_PW);
> >
> >     // Add default content decoder. We insert a through line as it
> >     // doesn't matter that we get an encoding that we don't know.
> >     HTFormat_addCoding("*", HTIdentityCoding, HTIdentityCoding, 0.3);
> >
> >     HTParentAnchor * anchor =
> >         (HTParentAnchor *) HTAnchor_findAddress(strURL);
> >
> >     // Delete the default progress notification handler and replace it
> >     // with our own.
> >     HTAlert_deleteOpcode(HT_A_PROGRESS);
> >     HTAlert_add(progressNotifyCB, HT_A_PROGRESS);
> >
> >     outputFile = fopen(strOutputFilename, "wb");
> >     if (outputFile == NULL)
> >     {
> >         cleanup(request);
> >         i_errStatus = HRERR_CANT_OPEN_FILE;
> >         i_errStatus << strOutputFilename;
> >         i_errStatus.logError();
> >         return i_errStatus;
> >     }
> >
> >     HTRequest_setOutputStream(
> >         request,
> >         HTFWriter_new(request, outputFile, YES));
> >
> >     // Set event timeout
> >     HTHost_setEventTimeout(i_timeoutSeconds * MILLI_PER_SEC);
> >
> >     // Make sure that the first request is flushed immediately and not
> >     // buffered in the output buffer
> >     // Note (marka): I'm not sure if this is necessary; it was copied
> >     //              straight out of the libwww example
> > //    HTRequest_setFlush(request, YES);
> >
> >     // Add our own after-filter, called when the request terminates
> >     HTNet_addAfter(terminateCB, NULL, NULL, HT_ALL, HT_FILTER_LAST);
> >
> >     // Start the request
> >     status = HTLoadAnchor((HTAnchor *) anchor, request);
> >
> >
> >     if (status != YES)
> >     {
> >         cleanup(request);
> >         i_errStatus = HRERR_WWW_NO_SERVER;
> >         i_errStatus.logError();
> >         return i_errStatus;
> >     }
> >
> >     // Go into the event loop...
> >     HTEventList_loop(request);
> >
> >     HTEventList_unregisterAll();
> >     cleanup(request);
> >     return i_errStatus;
> > }
> >
> > where my terminate handler is
> >
> > int
> > HRWWWTransferInterfaceImpl::terminateCB(
> >     HTRequest * request,
> >     HTResponse * response,
> >     void * param,
> >     int status)
> > {
> >     ...
> >     HTEventList_stopLoop();
> >     return HT_OK;
> > }
> >
> > and the cleanup function contains
> >
> > void
> > HRWWWTransferInterfaceImpl::cleanup(HTRequest * & request)
> > {
> >     if (request != NULL)
> >     {
> >         HTRequest_delete(request);
> >         request = NULL;
> >     }
> >     HTProfile_delete();
> > }
> >
> > Does anyone know whether there is a bug in the library or whether I did
> > something wrong?  I am using the libwww 5.2.8 source, and the system is
> > Solaris 2.6 (UNIX).
> >
> > Thanks,
> > Warren Ho
> >
> > --
> > Warren Ho                                       Algorithmics Inc.
> > Software Engineer                               185 Spadina Avenue
> > email: warrenh@algorithmics.com                 Toronto, Ontario
> > phone: (416)-217-4353                           Canada   M5T 2C6
> >
> >
>

Received on Wednesday, 17 May 2000 11:51:19 UTC