- From: David Mohring <heretic@ihug.co.nz>
- Date: Sat, 29 Jul 2000 07:46:16 +1200
- To: www-xml-packaging@w3.org
"Simon St.Laurent" wrote:
> > At 05:44 AM 7/29/00 +1200, David Mohring wrote:
> > Then this is something that is done on the server at the http protocol level.
>
> I don't object to servers using compression within the http protocol.
>
> I don't think, however, that we're anywhere near ready to start
> recommending how related resources should be delivered to clients.

I'm sorry if my post sounded like a recommendation; as I mentioned in my
original post, it was only a suggestion.

> We're not even out of the discovery starting gate, and trying to figure out
> how to set up compound requests that identify which pieces of a
> not-yet-discovered set of files seems like something that needs to come
> later.

Ok, the following then is only conjecture on my part.

Assuming:

1) To the end user/client, a compound document exists as a series of linked
   views (an xml/[x]html document/index) and resources embedded within a
   single file or collection of files.

2) The compound document request is going to be handled at the http server
   end using the existing http request/post/get + cookie protocols. (Or is
   the W3C going to require that a new protocol like SOAP be added to each
   browser/application?)

3) The server returns a reference URI (via redirection) and content that the
   end user/client can use to access the linked views and resources even
   when disconnected from the server or the internet.

therefore:

Unless you are going to modify the http protocols to include new negotiation
protocols that get the client's browser/applications to download the missing
elements, adding a "push-like" protocol (an IETF matter, not W3C), then the
resulting request of #3 is going to have to return a single file containing
all the content elements missing at the client's end.

So what format for that single file?

David Mohring - .NET?
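[One possible shape for that single file, purely as a sketch of the idea above: a zip archive (the approach Java's JAR files already take) whose first entry is an XML index mapping the linked views to their embedded resources. The entry names and the index schema here are invented for illustration only, not a proposal.]

```python
# Hypothetical sketch of a "single file" compound document: a zip
# archive whose first entry, package.xml, indexes the views and
# resources. All names and the index format are made up for this
# example; nothing here reflects an actual W3C format.
import zipfile

INDEX_XML = """<?xml version="1.0"?>
<package>
  <view href="index.xhtml"/>
  <resource href="logo.png" type="image/png"/>
  <resource href="style.css" type="text/css"/>
</package>
"""

def build_package(path):
    # Write the index entry first so a client can locate the views
    # and resources before reading the rest of the archive.
    with zipfile.ZipFile(path, "w") as pkg:
        pkg.writestr("package.xml", INDEX_XML)
        pkg.writestr("index.xhtml", "<html><body>a view</body></html>")
        pkg.writestr("logo.png", b"")        # placeholder resource bytes
        pkg.writestr("style.css", "body { }")
    return path

def list_package(path):
    # A disconnected client can enumerate everything it received.
    with zipfile.ZipFile(path) as pkg:
        return pkg.namelist()
```

Once the client has this one file it can resolve every linked view and resource locally, which is the disconnected-use property assumption #3 asks for.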
Received on Friday, 28 July 2000 15:42:17 UTC