- From: cynthia <notifications@github.com>
- Date: Thu, 25 May 2017 09:30:12 -0700
- To: w3ctag/design-reviews <design-reviews@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
- Message-ID: <w3ctag/design-reviews/issues/176/304055952@github.com>
From a quick skim, it seems like this is a wrapper spec for this API: https://developers.google.com/vision/

This seems like an interesting addition to the platform, but it also seems a bit risky in terms of implementation consistency across different UAs - and for platforms without a native API to wrap against, this would mean implementing detection within the browser itself. For consistency reasons, having a reference native library implementation would probably make adoption across implementations smoother. The "fast mode" bit only notes using some form of a speed-accuracy tradeoff algorithm, which I think amplifies the feature-consistency risk even further. Whether this would be an issue in practice (e.g. browser X detects faces better than browser Y) is unclear - but it does seem like it could push content developers toward UA sniffing, redirecting users to proven/tested implementations where detection performance is worse on certain implementations.

One bug I noticed was in QR codes, which according to the specification (the canonical standard from Denso Wave) can contain binary data, so a string type may not be appropriate for the raw data.

It seems more natural for features like those defined in this spec to exist as a library rather than as a built-in feature. However, what the web is missing is the raw building blocks (e.g. BLAS) for scientific computing, which does make me wonder whether that is what the platform will need for such libraries to exist.

--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/w3ctag/design-reviews/issues/176#issuecomment-304055952
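[Editor's sketch of the binary-payload point above: the payload bytes here are made up for illustration and are not from the spec. Arbitrary QR bytes cannot round-trip losslessly through a string decoded as UTF-8, which is why a string-typed raw value loses information.]

```javascript
// Hypothetical binary QR payload (not valid UTF-8).
const payload = new Uint8Array([0x00, 0xff, 0xd8, 0x80]);

// Decoding as UTF-8 is lossy: the invalid byte 0xff is replaced
// with U+FFFD (the replacement character).
const asString = new TextDecoder("utf-8").decode(payload);

// Re-encoding does not reproduce the original bytes.
const roundTripped = new TextEncoder().encode(asString);
console.log(roundTripped.length === payload.length); // prints false
```

An ArrayBuffer-valued raw field (alongside a convenience string) would avoid this loss.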
Received on Thursday, 25 May 2017 16:31:02 UTC