
Generating large amount of data

From: Anselm Baird_Smith <abaird@www43.inria.fr>
Date: Tue, 17 Dec 1996 09:05:55 +0100 (MET)
Message-Id: <199612170805.JAA07848@www43.inria.fr>
To: Sergej Melnik <melnik@aix520.informatik.uni-leipzig.de>
Cc: www-jigsaw@w3.org
Sergej Melnik writes:
 > This question might have already been stated, sorry about that:
 > I have used mechanisms analogous to ProcessFeeder in CgiResource for
 > generating some output from a database (actually, much output). Worked
 > fine except for the fact that the Feeder thread kept running after
 > disconnecting by pressing Stop in the browser, because nobody was
 > reading data out of the pipe output stream, so it got filled up by the
 > Feeder.
 > To kill such threads I wrote a small monitor that checks whether the
 > output stream of the corresponding client has changed.
 > Silly, isn't it? Could anybody point out what the right way is?

Note that the CGI script is a somewhat different situation, since there
the thread is used to feed the process's input (it is not the output
sent to the client).

I am wondering why you need a thread at all in your case. It would
probably be better (as far as I understand) to write a subclass of
InputStream that takes your database as a "parameter" (whatever this
means), and which generates the client output on the 'read' method
(this is how the SSI resource works, for example). When Jigsaw detects
a close on the client socket, it will close your stream (call its
close method), at which time you can do any cleanup.
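To illustrate, here is a minimal sketch of such a stream. The class name
DatabaseInputStream and the use of a List of strings standing in for
database rows are my invention for the example, not Jigsaw's API; the
point is just that 'read' produces data lazily and 'close' is where the
cleanup goes.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Iterator;
import java.util.List;

// Hypothetical example class (not part of Jigsaw): generates its output
// on demand in read(), so no feeder thread is needed.
public class DatabaseInputStream extends InputStream {
    private final Iterator<String> rows;   // stands in for a database cursor
    private byte[] buffer = new byte[0];   // bytes of the current "row"
    private int pos = 0;                   // next byte to hand out
    private boolean closed = false;

    public DatabaseInputStream(List<String> rows) {
        this.rows = rows.iterator();
    }

    // Fetch the next "row" only when the previous one has been consumed.
    @Override
    public int read() throws IOException {
        if (closed)
            throw new IOException("stream closed");
        while (pos >= buffer.length) {
            if (!rows.hasNext())
                return -1;                 // end of data
            buffer = (rows.next() + "\n").getBytes();
            pos = 0;
        }
        return buffer[pos++] & 0xff;
    }

    // Jigsaw calls close() when the client socket goes away; release
    // any database resources (statements, connections) here.
    @Override
    public void close() {
        closed = true;
    }
}
```

With this arrangement a Stop in the browser simply results in close()
being called; there is no pipe to fill up and no thread to kill.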

The next release of Jigsaw adds a w3c.jigsaw.http.Client:isInterrupted
method that you can also call during processing to check whether the
client has closed the connection (though if you implement the above
suggestion, I don't think you will need it).

Hope this helps,
Received on Tuesday, 17 December 1996 03:10:41 UTC
