- From: Joe Cincotta <joe@pixolut.com>
- Date: Thu, 12 Apr 2012 10:28:47 +1000
- To: "public-coremob@w3.org" <public-coremob@w3.org>
- Cc: developers <developers@pixolut.com>
- Message-ID: <CAM4JFMF7TW4mx+gmFPFsWEP9R4cDQVhfXbTCuW1RXaUmvO530A@mail.gmail.com>
This is my first post on this group, so apologies for rushing in like a bull in a china shop... I think Ringmark as it stands is just missing the point. It's a nice idea, but it's an inconsistent abstraction of something that is already inconsistently implemented. All it serves to do is make things less clear for developers. This is not to say that the idea of creating a standardised benchmark platform using coremob is wrong; what's wrong is abstracting the feature groups into rings (or arbitrarily clustering them at all).

If you look at the rings when you run the benchmark and see grey areas, it doesn't really help you. It would make more sense to remove the 'ring' part of the rendering and instead present a text list, with commonly agreed names for each feature being tested in the suite, rendering a green tick or red cross against each. A visual pass/fail of each test marks compliance with a specific feature for a specific browser implementation. (I do understand that the implementation of the TESTS then becomes the burning issue, but at least it isolates the real issue from the abstraction.)

The purpose, then, is an agreed set of feature implementation benchmarks for mobile browsers. It allows browser vendors the freedom to implement features as they see fit while getting immediate visibility into compatibility (will our implementation work for mobile web developers?), and it gives developers clarity on feature-specific compatibility for their target browsers (will my code run on that browser?). I can run the benchmark on the set of browsers I want to target and check whether the features I want to use are implemented. Easy.

To extend this idea, it would be great to let browser vendors publish the results of the benchmarks as part of their release process (even as part of their nightly build process) to a central repository, so that developers could refer to a central W3C location and search it without needing the browser OR the hardware.
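To make the flat list concrete, here is a minimal sketch of the idea. The detector names and checks below are illustrative placeholders, not the actual coremob/Ringmark tests, and the mock environment simply stands in for a browser's `window` so the sketch runs anywhere:

```javascript
// Run each feature detector and collect a flat pass/fail report,
// instead of clustering results into "rings".
function runFeatureReport(detectors, env) {
  const results = {};
  for (const [name, detect] of Object.entries(detectors)) {
    let pass = false;
    try {
      pass = Boolean(detect(env)); // a throwing detector counts as a fail
    } catch (e) {
      pass = false;
    }
    results[name] = pass;
  }
  return results;
}

// Illustrative detectors; real suites would use agreed feature names.
const detectors = {
  "localStorage": (env) => "localStorage" in env,
  "canvas-2d": (env) => typeof env.CanvasRenderingContext2D === "function",
  "geolocation": (env) => Boolean(env.navigator) && "geolocation" in env.navigator,
};

// Mock environment standing in for `window`, so this runs outside a browser.
const mockEnv = { localStorage: {}, navigator: { geolocation: {} } };

const report = runFeatureReport(detectors, mockEnv);
for (const [name, pass] of Object.entries(report)) {
  console.log(`${pass ? "PASS" : "FAIL"}  ${name}`);
}
```

The same `report` object could be serialised as JSON and posted to a central repository on each release or nightly build, which is all the publishing step would need.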
The database of test results would provide clarity on feature compatibility for specific browser versions, possibly even on a per-device basis.

In summary: if you want to cluster features, as a developer it makes sense to do that at your own discretion, since your application will use features in a unique way (do I really need level 1 compatibility if I use a single feature that IS implemented on browsers that are not fully 'ring 1 compliant'?). As a consumer, I doubt a user's decision would EVER be based on 'ring' compatibility, so simplification down to a 'marketing'-style visual representation of rings is kind of pointless for that too.

I think we should definitely explore a standardised test suite, open to all, as a high-level determination of feature compatibility for mobile browsers and devices...

Regards
Joe

--
Joe Cincotta, Managing Director, Pixolüt
P +61.2.8517.5080 M +61.419.694030 W http://www.pixolut.com
Follow Pixolüt on Facebook <http://www.facebook.com/pixolut/app_308343999186241> and Twitter <http://twitter.com/pixolut>
Check out our work on Facebook Studio <http://facebook-studio.com/site/agencies/476>!
Received on Thursday, 12 April 2012 14:20:10 UTC