- From: Jeni Tennison <jeni@jenitennison.com>
- Date: Thu, 6 Feb 2014 16:59:19 +0000
- To: www-tag@w3.org
Hi,

Thanks all for the discussion around the packaging proposal that I put together. My reflections are:

1. The requirements for packaging for efficient delivery are different from the requirements for creating self-contained packages. We might want to separate these requirements.

2. I feel like the evidence for a requirement for more efficient delivery is quite hand-wavy at the moment. Does anyone know of any concrete metrics for how much better the world would be if there were packages providing more efficient delivery of web resources?

3. On the format, we would need a separate content type, not a multipart/* one, in order to get rid of the necessity for a boundary parameter (that requirement is fixed for all multipart/* types). (See the first sketch in the P.S. below.)

4. As Mark Baker suggests, we should look at whether the observations that Alex makes about pre-parse scanners firing off requests immediately, based on references in an HTML page, also apply to the pipelining solutions in HTTP/2.

5. I'd like to have a better understanding of the trade-offs between putting resources in a package and requesting them separately. Plainly there is the initial connection overhead, but isn't it going to be more efficient to download 4 files in parallel than one after another? What are the trade-offs? Are they different for low/high bandwidth and low/high latency? How does the size of the resources interact with the number of resources being requested? Basically: under what conditions is it worth packaging resources? These seem to be common questions for pipelining too. Does anyone know of an analysis of this type? (A back-of-envelope model is sketched in the P.S. below.)

6. The requirements being placed on a solution are:

   * the package is returned from a single request
   * it is easy to implement naively in (e.g.) Apache or GitHub Pages

   I suspect (but don't know) that there is less of a requirement for packaging from sites that are naively implemented using Apache and GitHub Pages, compared to resource-heavy, highly-optimised sites, which are more likely to do things like set response headers and status codes. I would like to understand how the sites that could benefit from packaging intersect with those that are implemented without control over HTTP headers or status codes. Any ideas about how to make that assessment?

7. I am strongly of the opinion that imposing a new interpretation of a particular URL syntax is a Bad Idea, for the reasons outlined in the proposal, especially just as a short-term fix for something that will eventually be addressed through HTTP/2. I would like to see at least:

   * evidence that !/ doesn't appear in current URLs
   * evidence that it is easy to create the required directory structures for serving on Apache and GitHub Pages

   I can obviously try out the second of these myself, but evidence about the use of !/ in URLs will have to come from elsewhere… (One way to run that check is sketched in the P.S. below.)

Cheers,

Jeni
--
Jeni Tennison
http://www.jenitennison.com/
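
P.S. A few rough sketches, in case they help make some of the above concrete. All the names and numbers in them are invented for illustration.

On point 3: Python's email library makes the fixed nature of the boundary requirement easy to see. Whatever multipart/* subtype you invent (the "package" subtype below is made up), a boundary parameter is generated the moment the message is serialised:

    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    # "package" is an invented subtype, purely for illustration
    pkg = MIMEMultipart("package")
    pkg.attach(MIMEText("body { color: red }", "css"))
    pkg.attach(MIMEText("alert('hi');", "javascript"))

    # Serialising forces a boundary; note the boundary="..." parameter
    # in the Content-Type header of the output.
    print(pkg.as_string())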
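
On point 5: here is the kind of back-of-envelope model I have in mind. All the numbers are made-up placeholders, and it ignores TCP slow start, TLS, caching and server think-time, so treat it as a sketch of the question rather than an answer:

    # Compare N separate fetches with one packaged fetch over a single
    # bottleneck link. Sizes in bytes, rtt in seconds, bandwidth in
    # bytes/second.

    def sequential(sizes, rtt, bandwidth):
        # One connection, one request at a time: an RTT per request
        # plus the transfer time of each resource.
        return sum(rtt + size / bandwidth for size in sizes)

    def parallel(sizes, rtt, bandwidth, connections):
        # Idealised parallel fetch: the link still has to carry every
        # byte, so parallelism mainly saves round-trips.
        rounds = -(-len(sizes) // connections)  # ceiling division
        return rounds * rtt + sum(sizes) / bandwidth

    def packaged(sizes, rtt, bandwidth, overhead=0.02):
        # One request, one response: a single RTT plus the whole
        # payload (with a small framing overhead) at full bandwidth.
        return rtt + sum(sizes) * (1 + overhead) / bandwidth

    sizes = [30000] * 4                      # four 30 kB resources
    for rtt in (0.02, 0.2):                  # low vs high latency
        for bw in (125000, 12500000):        # ~1 vs ~100 Mbit/s
            print(rtt, bw,
                  round(sequential(sizes, rtt, bw), 3),
                  round(parallel(sizes, rtt, bw, 4), 3),
                  round(packaged(sizes, rtt, bw), 3))

If that toy model is even roughly right, packaging mainly saves round-trips, so it should matter most for many small resources on high-latency links, and hardly at all when bandwidth is the bottleneck. But I would much rather see real measurements.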
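
On point 7: the check for !/ in the wild could be as simple as running something like the following over a large URL corpus (the file name below is a placeholder; a Common Crawl URL dump or a search-engine URL list would be the obvious candidates). Looking only at the path conveniently excludes hash-bang URLs, since a #!/ fragment never reaches the server:

    from urllib.parse import urlsplit

    hits = 0
    total = 0
    # One URL per line; "url-corpus.txt" is a placeholder file name.
    with open("url-corpus.txt") as f:
        for line in f:
            total += 1
            if "!/" in urlsplit(line.strip()).path:
                hits += 1
    print(hits, "of", total, "URL paths contain '!/'")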
Received on Thursday, 6 February 2014 16:59:43 UTC