
Draft Summary of Coremob F2F 25 and 26 June 2012

From: Jo Rabin <jo@linguafranca.org>
Date: Sat, 28 Jul 2012 19:40:53 +0100
Message-Id: <111F500F-C326-4533-B271-859667CEA621@linguafranca.org>
To: W3C Coremob Community Group <public-coremob@w3.org>
Hi folks, sorry for the delay in getting this out. Here follows a draft summary of the F2F meeting in Palo Alto hosted by Facebook, 25 and 26 June 2012.

Many thanks


Summary of W3C Core Mobile Framework Community Group (Coremob) F2F 25 and 26 June 2012

Jo Rabin, co-Chair

26 July 2012

1. Introduction

1.1 Agenda

This was the original agenda for the meeting [1]:

Monday, June 25.

9:00am - 9:30am	Registration & coffee
9:30am - 11:00am	Intro and Goals
11:00am - 11:30am	COFFEE BREAK
11:30am - 12:30pm	Coremob level 1 use cases, requirements and spec.
12:30pm - 1:30pm	LUNCH
1:30pm - 3:00pm	Coremob level 1 use cases, requirements and spec.
3:00pm - 3:30pm	COFFEE BREAK
3:30pm - 5:00pm	Coremob level 1 use cases, requirements and spec.
6:30pm -	Group dinner in Palo Alto

Tuesday, June 26.

9:00am - 10:30am	Testing (un-conference format).
10:30am - 11:00am	COFFEE BREAK
11:00am - 12:30pm	Testing (un-conference format).
12:30pm - 1:30pm	LUNCH
1:30pm - 3:00pm	CG position on vendor prefixes.
3:00pm - 3:30pm	COFFEE BREAK
3:30pm - 4:30pm	Beyond coremob level 1. Brainstorm use cases and requirements.
4:30pm - 5:00pm	Wrap-up.

[1] http://www.w3.org/community/coremob/wiki/F2F/2012/June#Agenda

1.2 Summary of the Meeting

The main objectives of the meeting were achieved, namely: a) to resolve open questions on Coremob Level 1 so that editing could progress, and b) to determine an approach to testing and define the next steps on it.

In respect of a), the first day's discussion resolved a substantial number of issues, allowing the spec to progress to the next draft. Section 2 of this note provides more detail.

In respect of b), the second day's discussion resulted in consensus around an outline for a testing framework and resolved a number of matters as either in scope or out of scope. Section 3 of this note provides more detail.

1.3 References

Day 1 Minutes http://lists.w3.org/Archives/Public/www-archive/2012Jul/att-0006/minutes-2012-06-25.html
Day 2 Minutes http://lists.w3.org/Archives/Public/public-coremob/2012Jun/att-0143/minutes-2012-06-26.html
Level 1 Spec, as discussed at the meeting http://coremob.github.com/level-1/pub/coremob-1-20120619.html
Level 1 Spec, current draft http://coremob.github.com/level-1/index.html
Tracker https://www.w3.org/community/coremob/track/

2. Day 1

2.1 Introductions etc.

The group was welcomed by James Pearce representing our host Facebook. Those present introduced themselves and what they were hoping to get out of the meeting.

2.2 Coremob vs Ringmark

Led by Tobie Langel, the group discussed the continuing confusion between these concepts.

2.3 Scope of work

It was suggested that the group should have a completed L1 spec by the end of the year and that progress be made on testing, though it was clear that there are far too many features to test for a "comprehensive" test suite to be feasible.

2.4 Desire for Level 0

A discussion, led by Jo, on whether the group should take Level 0 off the shelf and chuck it away, or try to make it into something meaningful and useful.

ACTION-2 - Short document on how to get an L0 out and what it might mean  [on Jo Rabin]
ACTION-3 - Circulate his research on what types of apps require what types of features [on Matt Kelly]

2.5 Level 1

The remainder of the day was spent in a discussion led by Tobie on the various parts of his document.

2.5.1 Abstract and Introduction


A discussion of the current wording of the Abstract and Introduction.

ACTION-4 - Start a discussion on "what is the meaning of mobile web applications?" [Jo Rabin]
ISSUE-18 - Should "core features" actually be core at Level 1, or should we just consider features (in Level 1 Intro) - Created
ISSUE-19 - Lack of a push notification system, important feature, but no sufficient specification at this time - Created

2.5.2 Markup

A discussion, initiated by "2.1 UAs must support HTML5", of what "must support" means when specs have optional parts. Conclusion: subsetting specs is BAD when it is not done by the groups that created them.

ACTION-42 - Clarify on the wiki that the active L1 spec document is on github, and describe when things should be discussed on the wiki and when Github issues are used to further the discussion. [on Robert Shilston]
RESOLUTION: Add a conformance section to Level 1
ACTION-5 - Add a conformance section to Level 1 that explains what it means to say "User agents MUST support Foo [FOO]" [on Tobie Langel]
RESOLUTION: Subsetting is undesirable and will be avoided as much as possible; however it is pragmatically required in some cases. When subsetting does happen, it should not be understood as a subsetting of the specification itself but rather as a prioritization of our testing efforts

cf. ISSUE-2: the group decided that it wanted to set absolute benchmarks for performance, which resulted in two resolutions, though these were subsequently contradicted on Day 2.

RESOLUTION: there is no strong interest in producing relativistic tests at this point in the group, we will keep focusing our Quality of Implementation efforts on absolute measures
RESOLUTION: The group has no taste for making qualitative issues into relative measurement and wishes to continue to try to formulate specific objective tests.

A lengthy discussion on the ins and outs of the current specification of AppCache (cf. ISSUE-1), which is thought to be broken for a variety of apps.

RESOLUTION: CoreMob notes that many developers find AppCache as currently specified to be broken for their requirements or to require workarounds and requests that the HTML WG consider resolving this issue before shipping (either by fixing it in the specification, or by splitting it off to a separate specification that can be fixed standalone)
ACTION-6 - Talk to the HTML WG about fixing/splitting AppCache [on Robin Berjon ]
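
For context (not discussed verbatim at the meeting), AppCache is driven by a cache manifest with an all-or-nothing update model, which is one source of the reported breakage; a minimal sketch with illustrative file names:

```html
<!-- index.html: referencing a manifest opts the page into AppCache,
     and the page itself is cached implicitly (a common surprise) -->
<!DOCTYPE html>
<html manifest="app.appcache">
  <head><title>App</title></head>
  <body>...</body>
</html>

<!-- app.appcache: served as text/cache-manifest. If any listed resource
     fails to download, the entire cache update is discarded. -->
CACHE MANIFEST
# v1
app.css
app.js
NETWORK:
*
```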

2.5.3 HTML Media Capture

The fact that it's a WD was the point of discussion under ISSUE-3. The upshot was:

RESOLUTION: Close ISSUE-3 wrt media capture being a WD
ISSUE-3 Closed
ACTION-7 - Ask DAP to push HTML Media Capture to LC [on Robin Berjon]

And the capture attribute ISSUE-4 came into question

RESOLUTION: Drop note about capture attribute
ISSUE-4 Closed

2.5.4 SVG

ISSUE-20 - We need to have a way to express how conformance interacts with the availability of hardware - Created
ACTION-8 - Come up with some text for ISSUE-20 [on Robin Berjon]
RESOLUTION: we take all of SVG 1.1. ed2

2.5.5 Meta Viewport

simple HTML pages with various viewport and media query settings (via Robin)

A discussion about ISSUE-5 CSS-ADAPTATION spec currently marked as exploratory

RESOLUTION: we don't care that CSS-ADAPTATION is marked "exploratory" as it will happen anyway
ISSUE-5 Closed
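
For reference, the meta viewport setting under discussion, alongside the @viewport form from the exploratory CSS Device Adaptation draft (syntax subject to change):

```html
<!-- The de facto viewport meta tag used on mobile pages today -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  /* The CSS-ADAPTATION equivalent; draft syntax, not yet interoperable */
  @viewport { width: device-width; }
</style>
```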

2.5.6 App Config

A discussion about what a silly name "Widget" is and how little market traction it has.

2.5.7 View Orientation

It's important, but there's no spec, says Robin.
Orientation Lock Use Cases and Requirements by Tobie

WD of The Screen Orientation API (via Wonsuk)

[noted after the meeting 
The Screen Orientation API ed.
http://www.w3.org/TR/screen-orientation/ ]

RESOLUTION: the group asks the editor to update ISSUE-6 to mention that there is replacement technology on the way and that we'll point to it

2.5.9 Full-screen mode

A discussion about "full-screen" and "chromeless" modes. Noted that there is a problem with chromelessness (sic) and PCI requirements [for taking credit cards] lock symbol and so on.
RESOLUTION: CoreMob asks WebApps to include "chromelessness" in its configuration document, and to take PCI requirements into account there
RESOLUTION: both View Orientation and Chromeless are essential requirements and we would like WebApps to include them in configuration
ACTION-9 - S/full-screen/chromeless/ [on Tobie Langel]

2.5.10 Style

Tobie explained that his CSS is rusty. The discussion of Style comes under four headings: Core; Layout; Typography; Animations and Transitions. It was later agreed that this would be adjusted (see ACTION-16).

2.5.11 Style - Core

A discussion about whether to include the whole of CSS 2.1, especially paged media and printing. Answer was, we do.
ACTION-10 - Document that we have printing use cases in the UC&R document [on Tobie Langel]

Noted that this section (and others) would be more easily referred to with section numbers.
ACTION-11 - Add subsection numbers. [on Tobie Langel]

Backgrounds and borders - agreed that it would be included.
Color - likewise
ISSUE-9 Values - discussion about what's needed in what order
ACTION-12 - Propose priorities for CSS Values parts, get agreement from CG, send to CSS WG [on Robin Berjon]
ISSUE-10 Image Values and Replaced Content
ACTION-13 - Propose priorities for CSS Image Values and Replaced Content parts, get agreement from CG, send to CSS WG [on Robin Berjon]
ISSUE-11 Momentum Scrolling - there's no specification effort
ACTION-14 - Send use cases about overflow scrolling to www-style [on Tobie Langel]

Jean-Francois initiated a discussion about the Media Queries and the loading of assets for values that can never become true.
ACTION-15 - Draft a non-normative document with Implementation Notes about how to load assets depending on MQs that can never become true (with help from Robin, and later Jean-Francois too) [on Robert Shilston]
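
A minimal illustration of the concern (file name hypothetical): a rule guarded by a media query that can never match on a given device, raising the question of whether its assets are fetched anyway:

```html
<style>
  /* On a device that can never be 5000px wide, this rule can never
     apply; the asset-loading question behind ACTION-15 is whether the
     background image gets downloaded regardless. */
  @media (min-width: 5000px) {
    body { background-image: url("huge-desktop-only.png"); }
  }
</style>
```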

Fantasai suggested that this section be divided into Graphical and Processing
ACTION-16 - Split section 3.1 per fantasai's suggestion [on Tobie Langel]

Noted that CSS3 Selectors is missing from this section
ACTION-17 - Add CSS3 Selectors to section 3.1 [on Tobie Langel]

A discussion on responsive images, how there was a CG that worked on it and overloaded the src attribute, how Flexbox answers this problem and will be in CR at the end of July and how there will be 4 implementations by the end of the year. Problem sorted.

More discussion about how the Network Information API doesn't solve the responsive images question.
ACTION-18 - Write up an informative note about why Network Information API does not solve the responsive images issue [on Tobie Langel]

2.5.12 Style - Layout

Per the above discussion
ISSUE-12 (Flexbox is a WD) closed

The question was raised as to whether we are intentionally asking for 3D as well as 2D.
ISSUE-21 - Is 3D in scope under 3.2 Layout?

2.5.13 Typography

An inconclusive discussion about the fact that only text-shadow and possibly word-break are included - or was that meant to mean word-wrap?
ISSUE-13 annotated to reflect this.

2.5.14 Animations and Transitions

Noted that this has the same 3D drag along as above.
ISSUE-22 - Do we consider 3D in scope under 3.4 Animations?

2.5.15 Scripting

It seems we are OK with ECMAScript 5.1.

2.5.16 DOM

A discussion about the meaning of SHOULD here reveals hardware dependence concerns [same issue as discussed above see ISSUE-20 and ACTION-8]

2.5.17 Storage

We decided we wanted Quota Management in this Kitchen Sink.
ACTION-19 - Throw in Quota API [on Tobie Langel]

A discussion about Filewriter resulted in
ACTION-20 - Include FileWriter or an alternative in the spec [on Tobie Langel]
ISSUE-23 - Is there an alternative to FileWriter? FileSaver? Something else?

2.5.18 Networking

A question about WebSockets and whether they were very useful ... Wesley Johnston thought they were useful for gaming and multiplayer games, and was rewarded with this elliptically worded action:
ACTION-21 - Johnston to do something useful [on Wesley Johnston]
ACTION-22 - Provide use cases for WebSockets [on Wesley Johnston]
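
A sketch of the multiplayer-game pattern behind this use case; the endpoint and message shapes are hypothetical:

```html
<script>
  // WebSockets give a persistent, low-latency, bidirectional channel,
  // which is what real-time multiplayer games need.
  var socket = new WebSocket("wss://example.org/game");
  socket.onopen = function () {
    socket.send(JSON.stringify({ type: "join", player: "p1" }));
  };
  socket.onmessage = function (event) {
    var state = JSON.parse(event.data);
    // update the game view with the authoritative server state here
  };
</script>
```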

An inconclusive discussion about ISSUE-15 "Do we need both Shared and Web Workers?" resulted in an exhortation to give Tobie feedback on it.

A general lack of usefulness was noted around Network Info API - but some disagreed:
ACTION-24 - Send a note with the actual real use cases for Network Information (with help from Dan Sun) [on Jean-Francois Moy]

A discussion about online state in HTML5
ACTION-23 - Draft a proposal to drop online events from HTML5 [on Robert Shilston]
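
For reference, the online state machinery in question: navigator.onLine and the online/offline events, whose reliability was the concern:

```html
<script>
  // Note that navigator.onLine === true does not guarantee actual
  // connectivity, which is part of why these signals are distrusted.
  window.addEventListener("offline", function () {
    console.log("Lost connectivity");
  });
  window.addEventListener("online", function () {
    console.log("Back online; navigator.onLine =", navigator.onLine);
  });
</script>
```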

2.5.19 Sensors

We wondered if there were valid use cases for proximity
ISSUE-24 - Should we require Proximity Events?

2.5.20 Multimedia

Canvas 2D and Timing Animations APIs - questions about QoI - discussion deferred to discussion of testing.

Discussion about Web Audio - nowhere close enough to ready.

Discussion about access to Contacts and Calendar - noted that this is part of Web Intents
ISSUE-25 - Should Level 1 include Web Intents?

And likewise vibration
ISSUE-26 - Should we include the Vibration API?
ISSUE-27 - Should Level 1 include SSE?

2.5.21 Network

Tobie observed that he hadn't been able to find the spec for mmsto: - Doung-Young later pasted it into IRC.

Noted that there's no test suite for HTTP; it was wondered whether the reference should be to HTTP/1.1 or to HTTPbis.
ACTION-25 - Write something about conforming to HTTP/1.1 [on Jo Rabin]
ISSUE-28 - Should the HTTP11 reference go to bis?

2.6 Day 1 Wrap

RESOLUTION: thanks to the authors
RESOLUTION: thanks to our wonderful scribes, we look forward to more of their scribing tomorrow
RESOLUTION: many thanks to Facebook for excellent hosting

3. Day 2 

Another glorious sunny day dawned in Palo Alto, unremarkable in itself, but a pleasure for those of us who had travelled poor, tired and hungry for sunshine - or at least a relief from the interminable rain in our part of the world.

3.1 Testing

The morning started with Robin live editing a file identifying the topics raised from the floor requiring discussion under this topic.

- speed of canvas
- speed of CSS transitions
- audio latency
- audio parallelism
- physics performance (just raw JS performance)
- GC pauses (see ImpactJS)
- page scrolling performance
- touch responsiveness
✓ DOM manipulation (not a real issue)
- Conformance tests
- Ringmark
- blockers for test writing
- test automation
- things that have perceptual outcomes (reftests, audio reftests…)
- Prioritising interoperability issues
- overlaying atop video
- integration with the W3C Test Framework facilities
- Categorising testing/levels (but fragmentation is evil)
- Gaming 2D
- Gaming 3D
- Device-Aware functionality
- e-books
- Multimedia playback (Audio, Video…)
- Core (networking, application packaging & configuration, HTML…)
- Testing the untestable
- things that don't have adequate test specs of their own (e.g. HTTP)

3.2 QoI

Under this heading it was noted that egregious and random functional errors in implementations are out of scope. What is in scope, however, is functional conformance with performance that makes a feature unusable for a specific purpose.

After some discussion, consensus seems to be that for any particular metric setting baselines and levels isn't going to work. For example, video performance may be irrelevant for some types of apps or devices. What emerged then was that for any performance metric, testing would yield a performance value. To make sense of those values advisories would be produced for particular use cases. In recording test results a number would be recorded (like the number of frames/sec attained) for a particular device / browser combination.

An example would be an advisory that, to produce a useful game experience, the following metrics would need to reach certain thresholds: Speed of Canvas, Speed of CSS Animation, Number of Simultaneous Sounds, Latency of Sound Playback. A device primarily intended as a video player could be manufactured and be Level 1 functionally conformant, yet be unsuitable as a games platform, because the important metrics for being a useful (2012) game platform would not be attained.

There was a discussion about whether interruptions, such as garbage collection or external events like SMS arrival, would count against a score. It was decided not. A discussion about distinguishing between sustained and burst rates concluded that this would be left out of scope for now. Speed of DOM manipulation was felt not to be an issue any more. Consistency of frame rate was felt to be an issue, subject to the practicality of measuring it.

RESOLUTION: Interruptions and slowdowns due to factors external to the browser engine are out of scope for our tests
RESOLUTION: We are not going to specify baseline hardware, instead we will test device+browser combos
RESOLUTION: We will specify a number of metrics that will be used to assess the limits of performance of specific device+browser targets
RESOLUTION: We will not be testing burst performance for now
RESOLUTION: We will be testing in isolation

ACTION-26 - Provide numbers for required sprites/fps in games [on Tobie Langel - due 2012-07-03]
ACTION-33 - Document JSGameBench and the approach behind it. [on Matt Kelly]
ACTION-27 - Expeditiously check whether it is practical to measure consistency of framerate [on Robert Shilston - due 2012-07-03].
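
The consistency-of-framerate question (ACTION-27) reduces to simple statistics over frame timestamps; a minimal sketch with a hypothetical frameStats helper (not from the minutes - in a browser the timestamps would come from requestAnimationFrame):

```javascript
// Given frame timestamps in milliseconds, report the mean frame rate
// and the standard deviation of frame intervals as a consistency measure.
function frameStats(timestamps) {
  const intervals = [];
  for (let i = 1; i < timestamps.length; i++) {
    intervals.push(timestamps[i] - timestamps[i - 1]);
  }
  const mean = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  const variance =
    intervals.reduce((a, b) => a + (b - mean) * (b - mean), 0) /
    intervals.length;
  return {
    fps: 1000 / mean,            // average frames per second
    jitter: Math.sqrt(variance), // ms std-dev; lower means more consistent
  };
}
```

Five timestamps 16 ms apart, for instance, yield 62.5 fps with zero jitter.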

3.3 Goals of Testing, Writing and Running Tests

A lengthy discussion on the approach to writing tests and the differences between test harnesses - especially the differences in approach between the W3C test suite and that taken in producing Ringmark. It was noted that a very substantial body of tests had been produced using the W3C methodology and that it would not be wise to abandon them. It was also noted that a browser vendor would be unlikely to comprehensively exercise all the tests, since they take too long to run. The W3C test suite is documented at [2].
[2] http://w3c-test.org/framework/app/suite

Other issues that emerged were that incorrect tests cause serious damage if implementors merely try to pass them, without regard to their correctness (this was said to have been a problem with ACID as discussed on Day 1).

There was another inconclusive and lengthy discussion about test runners. It was noted that having one test runner (harness) is good but that having two or more is progressively worse. Further, it was noted that the ideal test run takes more than 1 minute (for comprehensiveness) but less than 24 hours (for practicality).

It was noted that test suites simply don't exist for many things proposed for level 1. It was noted that the quality of existing tests had not been assessed. It was noted that defining tests, defining a framework for execution of tests and reporting the results of tests were separate endeavours all of which could be done in varying depths but in the end not all of which could be achieved by the group.

Discussion moved on to consider that the scope of work looked like a multi year effort which would not achieve the objective of decreasing fragmentation and increasing interoperability quickly.  It was suggested that the definition of Level 1 could be considered to be what had been achieved by December.

The group retired exhausted for lunch.

ACTION-28 - Write documentation for testharness.js [on Robin Berjon - due 2012-07-03].
ACTION-29 - Survey people and compile a list of common errors in test writing [on Josh Soref - due 2012-07-03].
ACTION-30 - Remove the dependency on Node to get Ringmark running, and help make it easier to set up [on Matt Kelly - due 2012-07-03]
ACTION-31 - Look into something like jsFiddle for test writing [on Robin Berjon - due 2012-07-03]
ACTION-32 - Provide requirements for an automated test runner of all tests [on Jean-Francois Moy - due 2012-07-03]
ACTION-34 - Talk to OEMs/carriers about what they would most usefully need to get out of Ringmark results [on Matt Kelly - due 2012-07-03]
ACTION-35 - Carry out a gap analysis of existing W3C test suites [on Tobie Langel - due 2012-07-03]
ACTION-36 - Draft a test suite release strategy based on what scribe and Josh_Soref described [on Robin Berjon - due 2012-07-03]
ACTION-37 - Assess which existing test suites can be reused and at what level of coverage they stand [on Robin Berjon - due 2012-07-03]

ISSUE-29 - What are the requirements for a test framework?
ISSUE-30 - Should the document track the testing effort or not

3.4 Short Term Goal of Group and Relationship of Level 1 to Testing

We returned refreshed, energised and with renewed purpose from lunch.

Jo gave a lengthy peroration [not uncommon, as it happens], noting that various chair discussions over lunch may have broken what appeared to be a log-jam. 

"We need concrete deliverables by the end of the year. Producing the Level 1 spec seems perfectly achievable. What's more problematic is the test framework and the tests that go with Level 1.

Fantasai made the point before lunch that the visual output produced by Ringmark is very appealing and we'd like our work to have the same appeal. But it doesn't have to be this CG that provides the visual representation. In fact, it would be preferable if we left that work to others, so that we are providing an environment in which both headless and visual output could be produced from the same environment. We'd need a proof by the end of the year that such output is possible, an existence proof, and in discussion Facebook say they are happy to adapt what they have to fit in with the environment to be provided. Others can also produce things that use the output of such a test runner - browser vendors, for example, might want to do something non-visual.

We do need some tests to run. In fact lots of tests exist already, but it's clear that a complete suite of tests won't be possible by the end of the year, far from it. We need to decouple the availability of the testing environment from the completeness of the tests to run in it. Completing the tests is something that will need to be addressed in phases that go beyond the end of this year. We could say that whatever exists by the end of the year is the first such phase. By definition.

In addition, though, we need a gap analysis that identifies where there aren't tests that we think are needed and put a priority order on them. 

So what is the environment that the tests run in? Seems clear to me that the only viable option [applying Sutton's law, sort of] is the W3C framework which is where effort is concentrated. Facebook say they can adapt what they have to work with that - non-trivial but do-able.

We also need a document explaining the approach saying what the overall programme is, and where the first year's effort fits into it."

The following discussion was by way of explaining the above.

Robin explained that the existing system at http://w3c-test.org/ contains tests provided by various WGs, along with a test runner that can be used to assess a browser by invoking the tests from the browser in question. There are also non-automatable tests whose results can be stored by the person running them. Since some of the things we want to test are not automatable, they can't be addressed by a Ringmark approach, but the results of the tests can be displayed in that kind of way, since they can be pulled from the database.

Further discussion resulted in further explanation, Robin again: We want two things, a) go to a page, it executes automated tests, tells you your browser sucks. Like what Ringmark does today. b) Pull results from either the W3C database or some private instance and display, or generate "buzzword compliant" report.
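
The shape of a W3C-style automated test, for illustration (paths follow the usual w3c-test layout; the tested feature is arbitrary):

```html
<!-- testharness.js runs the assertions; testharnessreport.js hooks the
     results into the framework so a runner (or a Ringmark-style front
     end) can collect them -->
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
  test(function () {
    assert_true("geolocation" in navigator, "geolocation should be exposed");
  }, "navigator.geolocation is present");
</script>
```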

The idea is not to limit how testing and reporting happen but to make a start at it. As Robert observed: create a short-term hit list, and choose your own reporting and visualization. Data can be shared, people can contribute tests.

The CG will focus on what needs to be worked on by the end of the year. Matt said that FB has 14 features in Ringmark that could form the basis. Robin proposed that it be called Hit List Zero.

Robin summarised as follows:

- target: end of year
- Level 1 document: the aspirational documentation of what developers need to produce applications today
- specific test suite, nice and visual: can run atop testharness.js
- document for the specific test suite: the subset of the Level 1 document that describes the interoperability hit list targeted for the current test release
- refactoring Ringmark to be able to place the visual component atop results from a test run, or stored runs

RESOLUTION: the target for this group for EOY 2012 is the above summary
RESOLUTION: the primary input for Hit List Zero is the list of fourteen features currently focused upon by Ringmark
RESOLUTION: The group will not try to boil the ocean nor make a perfect system for the first release; we care only about rough consensus and running code

ACTION-38 - Make a fluffy picture out of the architecture described by Robin for the test system [on Tobie Langel - due 2012-07-03]
ACTION-39 - Draft the architecture of the test system [on Robin Berjon - due 2012-07-03]

3.5 Vendor Prefixes

The chairs agreed that it was not possible to discuss vendor prefixes because the proponent of the paper on the topic was not in attendance. In the brief discussion that ensued (ignoring the chairs, it would seem) the following tentative position emerged:

- The CSS WG is working on the problem
- Coremob tests won't have prefixes
- Implementers won't be punished for supporting them
- Implementers won't get credit for supporting the feature if they don't support the prefix-less property
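
Sketched in CSS, the tentative position would look like this (property and selector are illustrative):

```html
<style>
  /* Coremob tests would target only the unprefixed property; a UA
     supporting only -webkit-transform would not get credit for the
     feature, but would not be penalised for also supporting the prefix. */
  .card {
    -webkit-transform: rotate(5deg); /* tolerated, not required */
    transform: rotate(5deg);         /* this is what is tested */
  }
</style>
```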

3.6 Beyond Level 1

Tobie Langel noted that he's working on Use Cases and Requirements for Level 1 - the output of which he hopes to share with the group shortly.

3.7 QoI Testing [Reprise]

Formulating a methodology for QoI testing represents a level of work likely to be beyond what the group could reasonably expect to achieve inside this phase. Facebook and The Financial Times offered to contribute thinking on this topic to move discussion along.

RESOLUTION: For QoI testing, we're open to input, but we won't move on it before someone proposes something specific (FT & FB have tentatively suggested they might think about it)

3.8 Next Meetings

ACTION-40 - Check on hosting @Orange Oct 2-3, in London (alt Paris) [on Jean-Francois Moy - due 2012-07-03]
ACTION-41 - Figure out teleconference logistics, timing, and critical mass [on Jo Rabin - due 2012-07-03].

3.9 Acknowledgements

RESOLUTION: The CG thanks Facebook for great organisation, location, and logistics
RESOLUTION: The CG thanks Josh and fantasai for their outstanding scribing

Received on Saturday, 28 July 2012 18:41:26 UTC
