
Re: Segfault in w3c with PUT

From: Henrik Frystyk Nielsen <frystyk@w3.org>
Date: Fri, 31 Jul 1998 12:15:14 -0400
Message-Id: <3.0.5.32.19980731121514.02fd66e0@localhost>
To: "Matthew P. Bohnsack" <bohnsack@eai.com>, www-lib@w3.org
At 10:39 7/31/98 -0500, Matthew P. Bohnsack wrote:
>the latest CVS version is working much better
>2MB, 5MB, 10MB, 20MB, no problem!  This is probably
>good enough for my application, however at 30MB, I get 
>
>
>geronimo:/usr/src/libwww/ComLine/src# ./w3c -put test_data -dest
>http://merced/lala
>Looking up merced
>Contacting merced
>Writing...
>Reading...
>Writing...
>HTBufWrt.c:146 failed allocation for "HTBufferWriter_addBuffer" (30735915
>bytes).
>Program aborted.
>Abort

It is simply running out of memory. I think libwww is a bit greedy when
allocating memory for PUT - early on I tried to make the buffer smaller, but
the number of errors that can occur during an HTTP PUT means that more or
less the whole body has to stay in memory.

It may be worth looking at some optimizations in this part of the code
(mainly the HTBufferWriter stream, where the allocation fails).

Hope you can get by for now. An alternative would be to compress the data
(if it isn't already compressed) using deflate. I would have to see how
this works with a PUT; I only handle it for a GET at the moment.

Henrik
--
Henrik Frystyk Nielsen,
World Wide Web Consortium
http://www.w3.org/People/Frystyk
Received on Friday, 31 July 1998 12:14:58 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 23 April 2007 18:18:28 GMT