- From: Nikunj R. Mehta <nikunj.mehta@oracle.com>
- Date: Mon, 4 Jan 2010 16:59:01 -0800
- To: Joseph Pecoraro <joepeck02@gmail.com>
- Cc: public-webapps@w3.org
On Jan 4, 2010, at 1:26 PM, Joseph Pecoraro wrote:

>>> - 4.1 Introduction
>>> http://dev.w3.org/2006/webapi/DataCache/#datacache-intro
>>>
>>> The review policy says "(only for unsafe HTTP methods)".
>>>
>>> Why discriminate? I could see an application wanting to review a GET
>>> request. For example, an application which can be updated by multiple
>>> clients. It may be useful to examine the data returned from a GET,
>>> which may contain data that the other clients POST/PUT.
>>
>> Firstly, an application cache is shared by all those who have the same
>> manifest (provided they are all in the same origin).
>
> I meant "multiple clients" more broadly, such as multiple users on
> different machines.

Got it.

>> Secondly, reviewing a GET response is easy to do with the oncaptured
>> event handler that is defined on transactions.
>
> Again, why the special case? Why not just naturally include GET in the
> reviewable requests? I realize that the oncaptured event will only fire
> with GET requests, but I don't see any advantage to separating it out
> completely.

Of course, GET requests can be reviewed, just like other requests. I was
trying to explain that the [[oncaptured]] event fits better with the
example you gave above.
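To make that concrete, reviewing a captured GET from script could look
roughly like the sketch below. Apart from the oncaptured handler and
HttpResponse#bodyText, the names used here (openDataCache,
offlineTransaction, capture, and the event's uri/response members) are
placeholders and may not match the draft IDL.

    // Rough sketch only: treat everything except oncaptured and bodyText
    // as assumed names, not the draft's actual IDL.
    const cache = (window as any).openDataCache();

    cache.offlineTransaction((tx: any) => {
      tx.oncaptured = (evt: any) => {
        // Examine the representation captured for a GET, e.g. data that
        // other clients previously PUT or POSTed to the server.
        const body: string | null = evt.response ? evt.response.bodyText : null;
        if (body !== null) {
          console.log("captured", body.length, "characters for", evt.uri);
        }
      };
      // Ask the cache to capture a resource that is normally fetched with GET.
      tx.capture("/shared/list");
    });

The point is only that the captured event already gives script a hook
over GET responses without widening the review policy.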
>> Thirdly, the review policy can only be used through the offline
>> handler, i.e., the embedded local server, and activated by the user
>> agent. Therefore, if an application wishes to review, it should be
>> prepared to intercept. If it is not, it should use the [[captured]]
>> events for examining data returned from a GET.
>
> I agree; I have no complaints here about offline handlers. However,
> "if an application wishes to review", why should it need to "use the
> [[captured]] events for examining data returned from a GET" instead of
> just using the review handler it already set up?

The review function is defined to be used in conjunction with the
interception function. The choice between which function to use is left
to the user agent and takes place as defined in the networking model.

>>> - IDL Descriptions
>>> HttpRequest#bodyText and HttpResponse#bodyText
>>>
>>> [.. snip ..]
>>>
>>> Is there a generalization that can be applied here? A white-list is
>>> likely overly restrictive, and not future proof.
>>
>> I have changed the two descriptions so that there is no longer a
>> restriction on the use of this attribute with specific MIME types. I
>> have provided XMLHttpRequest-style descriptions of these two
>> attributes.
>
> Excellent, this is much better. Thanks.
>
> I have no experience in specification writing. Is there some way, like
> you have done with the HTML5 terminology, to point out that this
> section refers to / should be kept in sync with the XHR description,
> to prevent one or the other getting out of date?

I considered but decided against that, since the XHR spec relies on a
number of concepts that I don't wish to bring over to this spec.
Additionally, the XHR spec only deals with the response body, not with
the request body text.

>>> - Usefulness of incrementPendingUpdates / decrement
>>> http://dev.w3.org/2006/webapi/DataCache/#widl-CacheTransactionRequest-incrementPendingUpdates
>>>
>>> Is the number of updates useful, or even accessible? It seems as
>>> though developers only really need to know if there are any updates
>>> or not. A "dirty" flag boolean, for instance.
>>
>> The use case is that of a browser that doesn't keep multiple
>> applications, i.e., Web pages, open. When you use an application
>> off-line and create dirty data in the cache, you want to provide an
>> opportunity for that dirty data to be flushed to the server. Since the
>> application may not have any way to communicate with a user that it
>> has dirty data when the user agent is in a position to communicate
>> externally, it is necessary for the user agent to keep track of the
>> existence of dirty data. The count information merely provides some
>> idea to a user as to how large the dirty data has become.
>
> It sounds like the reason these methods exist is to manipulate the
> otherwise readonly pendingUpdates attribute of ApplicationCache2 to
> "provide some idea [...] as to how large the dirty data has become".
> If that is the case, then it would be useful to have a number value
> that is persistent and that you can easily adjust.
>
> I have a feeling that changes by 1 may be overly restrictive, but I
> will wait for usage feedback. It may be that increment / decrement
> operations work very nicely with the rest of the API.

I am open to changing this API once we have some usage experience.
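As a rough illustration of the intended flow (only
incrementPendingUpdates, the readonly pendingUpdates attribute on
ApplicationCache2, and HttpRequest#bodyText come from the draft; the
decrement method name, the callback shapes, and the storage details are
assumptions):

    // Sketch only: how the local-server callback receives the transaction
    // request object, and the decrementPendingUpdates name, are assumptions.
    function handlePostWhileOffline(request: any, txReq: any): void {
      // Store the unsent change locally, then record that dirty data exists,
      // so the user agent can track it even after the page is closed.
      localStorage.setItem("pending:" + Date.now(), request.bodyText);
      txReq.incrementPendingUpdates();
    }

    function onChangeFlushed(txReq: any): void {
      // Once a queued change has been written back to the server,
      // reduce the count of pending updates again.
      txReq.decrementPendingUpdates();
    }

    // The user agent (or the page itself) can read the counter to show
    // roughly how much dirty data has accumulated.
    const pending = (window as any).applicationCache.pendingUpdates;
    console.log(pending, "updates waiting to be flushed");

Whether changes of 1 are too coarse is exactly the kind of usage
feedback that should drive any change to this part of the API.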
> Thanks for the clarifications and fixes for the other issues.

You are welcome.

Nikunj Mehta
http://blog.o-micron.com

Received on Tuesday, 5 January 2010 01:00:24 UTC