
bug in HTSChunk.c

From: Jens Meggers <jens.meggers@firepad.com>
Date: Tue, 27 Mar 2001 09:55:21 -0800
Message-ID: <DDF913B74F07D411B22500B0D0206D9F18547D@FIREPLUG>
To: "'www-lib@w3.org'" <www-lib@w3.org>
Hi libwww users,

I found a small bug in HTSChunk.c:

In the function

PUBLIC HTStream * HTStreamToChunk (HTRequest * 	request,
				   HTChunk **	chunk,
				   int 		max_size)
{

there is a max_size parameter that lets you specify the maximum amount of
data to be loaded into a chunk. This feature makes sense because we do not
want to overload the system with a huge object.
The documentation says that the accepted values for max_size are:

0 for the default size,
-1 for no size limit,
and a number > 0 for an explicit size limit.
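
For illustration, here is a rough sketch of how HTStreamToChunk() is
typically wired into a request. The surrounding calls
(HTProfile_newPreemptiveClient, HTRequest_setOutputStream, HTLoadAbsolute
and friends) are the usual libwww helpers as I remember them, and the
application name and URL are just placeholders, so treat this as an
untested sketch rather than working code:

    #include <stdio.h>
    #include "WWWLib.h"
    #include "WWWInit.h"
    #include "HTSChunk.h"          /* for HTStreamToChunk(), if not already pulled in */

    int main (void)
    {
        HTRequest * request;
        HTChunk *   chunk = NULL;
        HTStream *  target;

        /* Preemptive (blocking) profile so the load finishes before we
           look at the chunk. */
        HTProfile_newPreemptiveClient("ChunkTest", "1.0");
        request = HTRequest_new();

        /* Buffer at most 64 KB of the response body in the chunk. */
        target = HTStreamToChunk(request, &chunk, 64 * 1024);
        HTRequest_setOutputStream(request, target);

        if (HTLoadAbsolute("http://www.w3.org/", request) == YES && chunk)
            printf("Got %d bytes\n", HTChunk_size(chunk));

        if (chunk) HTChunk_delete(chunk);
        HTRequest_delete(request);
        HTProfile_delete();
        return 0;
    }

With the bug described below, the 64 * 1024 limit would silently become
HT_MAXSIZE.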

The size limit does not work as documented, because the maximum size of the
HTStream is set in HTStreamToChunk() to

	me->max_size = (!max_size) ? max_size : HT_MAXSIZE;

meaning that whenever max_size is != 0, it is overwritten with HT_MAXSIZE.

The code we want is

	me->max_size = (max_size) ? max_size : HT_MAXSIZE;


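To make the difference concrete, the two expressions behave like this
(standalone demo; the HT_MAXSIZE value here is just a placeholder, the real
one is defined in HTSChunk.c):

    #include <stdio.h>

    #define HT_MAXSIZE 0x10000      /* placeholder value for this demo only */

    int main (void)
    {
        int inputs[] = { 0, -1, 4096 };
        int i;
        for (i = 0; i < 3; i++) {
            int max_size = inputs[i];
            int buggy = (!max_size) ? max_size : HT_MAXSIZE;   /* current code */
            int fixed = (max_size)  ? max_size : HT_MAXSIZE;   /* proposed fix */
            printf("max_size=%5d  buggy=%6d  fixed=%6d\n", max_size, buggy, fixed);
        }
        return 0;
    }

The current expression turns every explicit limit (and -1) into HT_MAXSIZE
and only leaves 0 alone; the fixed one keeps explicit limits and -1, and
maps 0 to the default HT_MAXSIZE as documented.
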
In the current version, the code does work when max_size is set to 0; in
that case there is simply no size restriction. This is because
HTSC_putBlock() contains an if statement that checks for me->max_size > 0.
That check was added to detect the -1 value, but it also passes for a value
of 0.
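
For context, the guard works roughly like in the self-contained
illustration below (a reconstruction of the behaviour described above, not
the actual HTSC_putBlock source):

    #include <stdio.h>

    /* Only the fields that matter for the size check are modelled here. */
    typedef struct { int max_size; int size; } DemoStream;

    static int demo_putBlock (DemoStream * me, int len)
    {
        /* max_size <= 0, i.e. both -1 and 0, means "no limit" here */
        if (me->max_size > 0 && me->size + len > me->max_size)
            return 0;               /* limit reached: stop buffering */
        me->size += len;
        return 1;
    }

    int main (void)
    {
        DemoStream unlimited = { -1, 0 };
        DemoStream limited   = { 100, 0 };
        printf("unlimited accepts 500 bytes: %d\n", demo_putBlock(&unlimited, 500));
        printf("limited   accepts 500 bytes: %d\n", demo_putBlock(&limited, 500));
        return 0;
    }
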
If my patch is accepted, you might want to check your code and change all
calls to HTStreamToChunk() that set max_size = 0 to max_size = -1 before
upgrading to the new version.

Best Regards,

Jens 
