- From: Mike Meyer <mwm@contessa.phone.net>
- Date: Sun, 16 Aug 1998 13:12:09 PST
- To: www-html@w3.org
> > No, that's not what I'm saying. After you eliminate the fluff (like
> > random quotes) and bad ideas (like pages that change the HTML based
> > on the client), you're left with either real dynamic data - for which
> > SSI generally isn't sufficient - or things that tend to be built from
> > parts and then served to many people with the same content. Given
> > that disk is cheap, upgrading disk is cheaper than upgrading CPU, and
> > the cache problems mentioned above, the latter is *much* better done
> > by building the page once and updating it when one or more parts of
> > the data change.
>
> Let's say you've got a few hundred pages on your site, and would like
> to use a common header and footer for the documents. Now let's say
> after a while you decide to change the content of the header and
> footer, to give the site a new look... SSIs are wonderful for this
> purpose.

Building a static distribution tree is even better. Your solution spends
CPU cycles every time someone touches any page, and probably screws up
caches all over the network.

Worse yet, most (all?) SSI implementations put the include for the
header and footer in an SGML comment (e.g. Apache's
<!--#include virtual="header.html" -->), so it isn't visible to
validation tools built on the SGML toolset. Instead of embedding magic
cookies for SSI, either use PIs or define SGML entities for the header
and footer, and have the publication process normalize the pages
appropriately.

You now have to issue one extra command (the equivalent of "make all")
after replacing the header and footer, and you have to spend the disk
space to keep the pre-built files around. But it's *much* easier to add
more disk than to add more CPU or network bandwidth, and those are the
machine resources you're buying back by doing it this way.

	<mike
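For concreteness, here is a minimal sketch of the entity-plus-
normalization setup described above, assuming GNU make. The file names,
the entity declarations, and the "expand-entities" command are
illustrative placeholders, not a particular tool; substitute whatever
SGML normalizer your publication toolchain provides:

    <!-- page.sgml: source for one page; never served directly -->
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" [
      <!ENTITY header SYSTEM "parts/header.html">
      <!ENTITY footer SYSTEM "parts/footer.html">
    ]>
    <HTML>
    <BODY>
    &header;
    <P>Page-specific content goes here.</P>
    &footer;
    </BODY>
    </HTML>

    # Makefile: "make all" rebuilds only pages whose parts changed.
    SRCS  = $(wildcard *.sgml)
    PAGES = $(SRCS:.sgml=.html)

    all: $(PAGES)

    # expand-entities stands in for your SGML normalizer of choice;
    # listing the parts as prerequisites means editing the header or
    # footer triggers a rebuild of every page that uses them.
    %.html: %.sgml parts/header.html parts/footer.html
    	expand-entities $< > $@

Touch parts/header.html, run make, and only the affected .html files
are regenerated; the server then hands out plain static files that
caches all over the network can treat normally.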