- From: Mike Akerman <mike@cavern.uark.edu>
- Date: Tue, 20 Nov 2001 15:22:21 -0600 (CST)
- To: Paul Kowlessar <Paul_Kowlessar@CdnAir.Ca>
- cc: www-xsl-fo@w3.org
> Hi,
>
> I am running fop .20 to transform my xml files to pdf files. Everything
> is working fine until I try to parse/transform a large xml file, 4
> megs. I eventually run out of memory. From what I understand this is
> a fop problem due to the fact that fop keeps everything in memory. I
> have, however, found a work-around. By using multiple fo:page-sequences
> for my xml document, I should be able to decrease my memory usage. Here
> is where I am stuck. I am not sure how to use multiple page-sequences
> for the following xml doc.

I've had the same problem for large files, and I just adjust the memory
size of my Java VM. From a Unix box it's:

    FOP_OPTS="-Xmx256m"; export FOP_OPTS
    FILENAME="test"
    /export/home1/webapps/library/Fop-0.20.2/fop.sh "$FILENAME.fo" -pdf "$FILENAME.pdf"

Michael Akerman

> <page>
>   <employee>
>     <stuff1/>
>     <stuff2/>
>     <stuff3/>
>   </employee>
>   <employee>
>     <stuff1/>
>     <stuff2/>
>     <stuff3/>
>   </employee>
>   ....
>   ....
>   ....
> </page>
>
> Can someone help me?
>
> Cheers...Paul
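For the page-sequence question itself, a minimal sketch of the idea: have the stylesheet emit one fo:page-sequence per employee, so FOP can lay out and release each sequence instead of holding the whole formatting-object tree in memory. The master name "simple" and the page dimensions below are my own assumptions, not from Paul's document; only the element names (page, employee, stuff*) come from his sample. Note that the XSL 1.0 Recommendation calls the attribute master-reference, while some older FOP 0.20.x builds expected master-name instead, so check which one your FOP version accepts.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch only: master name and page geometry are assumed values. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:fo="http://www.w3.org/1999/XSL/Format">

  <xsl:template match="/page">
    <fo:root>
      <fo:layout-master-set>
        <fo:simple-page-master master-name="simple"
            page-height="11in" page-width="8.5in" margin="1in">
          <fo:region-body/>
        </fo:simple-page-master>
      </fo:layout-master-set>

      <!-- One fo:page-sequence per employee: each sequence can be
           rendered and discarded independently, bounding memory use. -->
      <xsl:for-each select="employee">
        <fo:page-sequence master-reference="simple">
          <fo:flow flow-name="xsl-region-body">
            <fo:block>
              <xsl:apply-templates/>
            </fo:block>
          </fo:flow>
        </fo:page-sequence>
      </xsl:for-each>
    </fo:root>
  </xsl:template>
</xsl:stylesheet>
```

The trade-off is that each page-sequence starts on a new page, so this fits naturally when each employee's record should begin a fresh page anyway.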
Received on Tuesday, 20 November 2001 16:23:44 UTC