Re: Size of chunks in ToChunk methods?

	However, in LoadToFile.c, HTLoadToFile() is used to grab the 
requested data and write it to disk.  What I would like is an interface to 
a stream that I can pull buffers (chunks) off of as the data arrives.

// The HTLoadToFile form
	// Load data to file.
	status = HTLoadToFile(getme, request, outputfile);
	// Reopen the file that HTLoadToFile just wrote.
	if ((fp = fopen(outputfile, "r")) != NULL) {
		// Deal with data.
		fclose(fp);
	}
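
Spelled out a bit more, this is roughly what I do now, modelled on the
LoadToFile.c example.  The profile call, temp file name and read loop are
just illustrative, so check the signatures against HTAccess.html:

#include <stdio.h>
#include "WWWLib.h"
#include "WWWInit.h"

int main (int argc, char ** argv)
{
    const char * getme      = argc > 1 ? argv[1] : "http://www.w3.org/";
    const char * outputfile = "result.tmp";   /* illustrative temp file */
    HTRequest * request;
    BOOL status;

    /* Blocking (preemptive) client profile so HTLoadToFile returns when done */
    HTProfile_newPreemptiveClient("TestApp", "1.0");
    request = HTRequest_new();

    /* Trip 1: pull the whole entity body down into a local file */
    status = HTLoadToFile(getme, request, outputfile);

    /* Trip 2: reopen that same file just to get at the data */
    if (status) {
        FILE * fp = fopen(outputfile, "rb");
        if (fp) {
            char buf[4096];
            size_t n;
            while ((n = fread(buf, 1, sizeof buf, fp)) > 0) {
                /* Deal with data in buf[0..n-1]. */
            }
            fclose(fp);
        }
    }

    HTRequest_delete(request);
    HTProfile_delete();
    return 0;
}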

// What I would like to see in libwww.
	// Open a stream to pipe request data to.
	status = HTLoadToStream(url, output, request);
	// Read data as it comes (maybe transfer from stream to chunk).
	while (somereadfunc(output)) {
		// Deal with data.
	}

I think it would be more efficient to deal with the data as it comes down 
the stream.  That approach would cut out a whole layer of complications 
(file errors, disk I/O errors) and the extra pass through the filesystem.
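
For reference, here is roughly how I imagine the consumer side could be
built on the existing HTStream interface, so libwww hands each buffer to my
code as it arrives instead of spooling to disk.  This is an untested
sketch: ChunkReader and load_to_my_stream are names I made up, and the
callback signatures are copied from HTStream.html as best I understand
them, so please correct me if I have them wrong.

#include <string.h>
#include "WWWLib.h"

struct _HTStream {                        /* private state for our sink     */
    const HTStreamClass * isa;            /* must come first (libwww style) */
    long bytes_seen;
};

/* libwww calls this with each incoming buffer (chunk) of the entity body */
PRIVATE int ChunkReader_put_block (HTStream * me, const char * buf, int len)
{
    me->bytes_seen += len;
    /* Deal with data in buf[0..len-1] right here, as it comes in. */
    return HT_OK;
}

PRIVATE int ChunkReader_put_character (HTStream * me, char c)
{
    return ChunkReader_put_block(me, &c, 1);
}

PRIVATE int ChunkReader_put_string (HTStream * me, const char * s)
{
    return ChunkReader_put_block(me, s, (int) strlen(s));
}

PRIVATE int ChunkReader_flush (HTStream * me)
{
    return HT_OK;
}

PRIVATE int ChunkReader_free (HTStream * me)
{
    HT_FREE(me);
    return HT_OK;
}

PRIVATE int ChunkReader_abort (HTStream * me, HTList * errors)
{
    HT_FREE(me);
    return HT_ERROR;
}

PRIVATE const HTStreamClass ChunkReaderClass = {
    "ChunkReader",
    ChunkReader_flush,
    ChunkReader_free,
    ChunkReader_abort,
    ChunkReader_put_character,
    ChunkReader_put_string,
    ChunkReader_put_block
};

/* Point the request's output at our sink instead of a file or a chunk */
PRIVATE BOOL load_to_my_stream (const char * url, HTRequest * request)
{
    HTStream * me;
    if ((me = (HTStream *) HT_CALLOC(1, sizeof(HTStream))) == NULL)
        HT_OUTOFMEM("ChunkReader");
    me->isa = &ChunkReaderClass;
    HTRequest_setOutputStream(request, me);
    return HTLoadAbsolute(url, request);
}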

Does this exist??  Are my models correct??  Am I way off base?  

--Dave Horner

On Sun, 4 Apr 1999, Henrik Frystyk Nielsen wrote:
> 
> 
> > Dave wrote:
> > 
> > Is the chunk created by:
> > chunk = HTLoadAnchorToChunk(anchor, request);
> > The size of the file being requested?
> > Is there any way to grab the requested file by lines or by buffer?
> > I have looked at the source and it looks like you malloc the whole
> > file into memory?!
> > Could this be a problem with big files (100's of megs)?
> 
> Yes indeed - and also you have to wait for the whole thing to come in.
> If you want to use "progressive rendering" of a document, then you have
> to use streams instead of the chunk version.
> 
> As an example of using streams, the sample app 
> 
> 	http://www.w3.org/Library/Examples/LoadToFile.c
> 
> linked from
> 
> 	http://www.w3.org/Library/Examples/
> 
> in fact uses streams because the HTLoadToFile() in
> 
> 	http://www.w3.org/Library/src/HTAccess.html
> 
> uses streams.
> 
> --
> Henrik Frystyk Nielsen,
> World Wide Web Consortium
> http://www.w3.org/People/Frystyk
> 

Received on Monday, 5 April 1999 01:07:43 UTC