W3C home > Mailing lists > Public > www-xsl-fo@w3.org > May 2008

Re: Sanitizing data

From: JohnVirgo <john.virgo@document.co.uk>
Date: Wed, 14 May 2008 01:01:26 -0700 (PDT)
Message-ID: <17225696.post@talk.nabble.com>
To: www-xsl-fo@w3.org


David,

Sorry about not providing a sample; the data was sensitive. I've tried to
sum up the data here:

<page>
  <page_type>
    ...
    <page_title>
      <code>10</code>
      <value>Page title</value>
    </page_title>
    <page_kind>
      <code>11</code>
      <value>Z1</value>
    </page_kind>
    ...
  </page_type>
  <page_data>
    ...
    <page_prev>
      <page_before>
        <!-- page_ref name and count differs -->
        <page_ref>
          <code>62</code>
          <value1>ABC</value1>
          <value2>123</value2>
          <value3>XYZ</value3>
          ...
          <!-- A sub group of nodes can appear under here -->
          <valuex>
            ...
            <somevalues>1,2,3</somevalues>
            <someotherdata>a,b,c</someotherdata>
            ...
          </valuex>
        </page_ref>
      </page_before>
    </page_prev>
    ...
  </page_data>
</page>

I believe that effective use of xsl:otherwise (inside an xsl:choose) may
save me here. My problem is that the value of 'page_ref/somevalues' could
mean that I need to process 'page_title/value' differently. This is part of
my problem: the source data contains a mix of processing instructions
alongside the data, which in turn affects part of the data sitting in a
completely different path.
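
One way that cross-path dependency might be expressed (a sketch only; the
fo: output elements and the literal trigger value '1,2,3' are assumptions,
not taken from your data) is an xsl:choose in the page_title template whose
test reaches over to the somevalues node elsewhere in the tree:

  <!-- Hypothetical XSLT 1.0 fragment: branch page_title handling on a
       value that lives in a completely different path of the source. -->
  <xsl:template match="page_title">
    <xsl:choose>
      <!-- '1,2,3' is an assumed trigger value, not from the real data -->
      <xsl:when test="/page/page_data/page_prev/page_before/page_ref/valuex/somevalues = '1,2,3'">
        <fo:block font-weight="bold"><xsl:value-of select="value"/></fo:block>
      </xsl:when>
      <xsl:otherwise>
        <fo:block><xsl:value-of select="value"/></fo:block>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>

The absolute path in the test is only illustrative; a key or a variable
bound once at the top of the stylesheet would avoid re-walking the tree.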

I think that by counting the expected number of nodes and using
xsl:otherwise I should be able to trap all conditions that should not occur.
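
For the node-count trap, something like this (again a sketch; expecting
exactly one page_ref is an assumption) would halt on anything unexpected
rather than silently producing wrong output:

  <!-- Hypothetical XSLT 1.0 fragment: accept only the expected node
       count, and abort via xsl:message on any other condition. -->
  <xsl:template match="page_before">
    <xsl:choose>
      <xsl:when test="count(page_ref) = 1">
        <xsl:apply-templates select="page_ref"/>
      </xsl:when>
      <xsl:otherwise>
        <!-- any count we did not expect stops processing -->
        <xsl:message terminate="yes">Unexpected page_ref count: <xsl:value-of select="count(page_ref)"/></xsl:message>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>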

Thanks,
JV
-- 
View this message in context: http://www.nabble.com/Sanitizing-data-tp17080137p17225696.html
Sent from the w3.org - www-xsl-fo mailing list archive at Nabble.com.
Received on Wednesday, 14 May 2008 08:02:29 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Wednesday, 14 May 2008 08:02:29 GMT