Re: Segfault in w3c with PUT

At 10:39 7/31/98 -0500, Matthew P. Bohnsack wrote:
>the latest CVS version is working much better
>2MB, 5MB, 10MB, 20MB, no problem!  This is probably
>good enough for my application, however at 30MB, I get 
>
>
>geronimo:/usr/src/libwww/ComLine/src# ./w3c -put test_data -dest
>http://merced/lala
>Looking up merced
>Contacting merced
>Writing...
>Reading...
>Writing...
>HTBufWrt.c:146 failed allocation for "HTBufferWriter_addBuffer" (30735915
>bytes).
>Program aborted.
>Abort

It simply runs out of memory. I think libwww is a bit greedy when
allocating memory for PUT - in the beginning I tried to make it smaller,
but the number of error conditions that can occur during an HTTP PUT
(an authentication challenge or a redirect may force the body to be
resent) more or less requires the whole thing to stay in memory.
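
For what it is worth, when the body comes from a plain file (as with
-put test_data), the resend case does not strictly need the body in RAM:
one could rewind the file and read it again. A minimal sketch of that
idea in C - the names are mine, not the real libwww API:

#include <stdio.h>

/* Hypothetical resend hook: rather than keeping the whole PUT body
 * buffered, rewind the source file and stream it again whenever the
 * server forces a retry (e.g. a 401 challenge or a redirect). */
static int resend_body(FILE *src, int (*emit)(const void *buf, size_t len))
{
    char block[8192];
    size_t n;

    if (fseek(src, 0L, SEEK_SET) != 0)       /* back to the start */
        return -1;
    while ((n = fread(block, 1, sizeof block, src)) > 0)
        if (emit(block, n) < 0)              /* push each block on the wire */
            return -1;
    return ferror(src) ? -1 : 0;
}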

It may be worth looking at some optimizations in this part of the code
(mainly the HTBufferWriter stream where the allocation fails).
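
One such optimization might be to grow the buffer as a chain of
fixed-size blocks instead of one contiguous region, so a 30MB body never
needs a single 30MB malloc. A rough sketch - hypothetical code, not the
actual HTBufferWriter internals:

#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE (64 * 1024)   /* grow in 64 KB steps, not one big chunk */

typedef struct Block {
    struct Block *next;
    size_t used;
    char data[BLOCK_SIZE];
} Block;

/* Append len bytes, allocating fresh fixed-size blocks as needed.  A
 * failed calloc() here costs one 64 KB block, not the whole body. */
static int chain_append(Block **head, const char *src, size_t len)
{
    Block *b = *head;
    size_t room, n;

    while (b && b->next)                     /* find the current tail */
        b = b->next;
    while (len > 0) {
        if (!b || b->used == BLOCK_SIZE) {   /* tail full: add a block */
            Block *nb = calloc(1, sizeof *nb);
            if (!nb)
                return -1;
            if (b)
                b->next = nb;
            else
                *head = nb;
            b = nb;
        }
        room = BLOCK_SIZE - b->used;
        n = len < room ? len : room;
        memcpy(b->data + b->used, src, n);
        b->used += n;
        src += n;
        len -= n;
    }
    return 0;
}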

Hope you can get by for now. An alternative would be to compress the data
(if it isn't already) using deflate. I would have to see how this works in
a PUT - I only handle it for GET at the moment.
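
If you want to experiment with that on the client side in the meantime,
something like the following zlib sketch would produce a deflate-coded
body (the names are mine, and the server would of course have to accept
Content-Encoding: deflate):

#include <stdlib.h>
#include <zlib.h>

/* Compress a PUT body with zlib before handing it to the request.
 * compress() emits zlib-wrapped deflate data, which is what the HTTP
 * "deflate" content-coding calls for.  Caller frees the result. */
static unsigned char *deflate_body(const unsigned char *src, uLong srclen,
                                   uLongf *outlen)
{
    /* zlib's documented worst case: 0.1% larger than the input + 12 bytes */
    uLong bound = srclen + srclen / 1000 + 12;
    unsigned char *dst = malloc(bound);

    if (!dst)
        return NULL;
    *outlen = bound;
    if (compress(dst, outlen, src, srclen) != Z_OK) {
        free(dst);
        return NULL;
    }
    return dst;
}

For a 30MB body you would really want zlib's streaming deflate()
interface rather than a one-shot buffer, but the one-shot call shows
the idea.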

Henrik
--
Henrik Frystyk Nielsen,
World Wide Web Consortium
http://www.w3.org/People/Frystyk
