
HTLoadToChunk

From: <Jim_Ravan@avid.com>
Date: Tue, 6 Apr 1999 11:46:23 -0400
To: www-lib@w3.org
Message-ID: <8525674B.0056A517.00@amm02.avid.com>


I am trying to test the latest CVS sources under NT. My code calls
HTLoadToChunk() to read a file from the network.

     chunk = HTLoadToChunk(url, request);
     if (chunk)
     {
          /* Wait for the libwww request to complete. */
          err = LibWWWWait(request);
          if (err == 0)
          {
               /* Get the data address and length. */
               chunkBuffer = HTChunkData(chunk);
               chunkLen = HTChunkSize(chunk);

               /* Process the chunk. */
               err = ProcessChunk(chunkBuffer, chunkLen);
          }
     }
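
For context, here is a rough, untested sketch of how a LibWWWWait-style helper might be wired up with libwww's after-filter mechanism. LibWWWWait, LibWWWDone, LibWWWInit, and the status variable are hypothetical names invented for this sketch; HTNet_addAfter, HTEventList_newLoop, and HTEventList_stopLoop are the actual libwww 5.x calls:

```c
#include "WWWLib.h"
#include "WWWInit.h"

/* Hypothetical after filter: stops the event loop when the request
   terminates, recording the final status for the caller. */
static int request_status = HT_OK;

static int LibWWWDone (HTRequest * request, HTResponse * response,
                       void * param, int status)
{
    request_status = status;
    HTEventList_stopLoop();
    return HT_OK;
}

/* Hypothetical blocking helper: spin the event loop until the after
   filter above stops it, then report success or failure. */
static int LibWWWWait (HTRequest * request)
{
    HTEventList_newLoop();      /* runs until HTEventList_stopLoop() */
    return (request_status == HT_LOADED) ? 0 : -1;
}

/* Registration, done once at startup: match all requests and all
   termination statuses, and run after the other filters. */
void LibWWWInit (void)
{
    HTNet_addAfter(LibWWWDone, NULL, NULL, HT_ALL, HT_FILTER_LAST);
}
```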

The LibWWWWait routine starts the HTEventLoop and waits for an after filter
to stop it, at which point it returns. The problem is with the semantics
of HTLoadToChunk(). The file being loaded contains something like the
following:

     ; This is a comment

     This is a line of text to be processed

Notice that there is a blank line after the comment line. With libwww 5.2,
the chunk encompassed the entire body, that is, HTChunkSize() returned the
size of the entire body. But with the latest CVS sources, HTLoadToChunk()
returns only the first line, that is, HTChunkSize() returns the length of
the first line of the file, not the length of the entire file. This used to
work. Can anyone enlighten me as to where I should look to fix this? This
is a bug, yes?

regards,
-jim
Received on Tuesday, 6 April 1999 11:51:47 GMT
