
Re: Barcode Scanning scenario for media capture requirements

From: Harald Alvestrand <harald@alvestrand.no>
Date: Mon, 04 Jun 2012 10:29:25 +0200
Message-ID: <4FCC71E5.6030401@alvestrand.no>
To: "SULLIVAN, BRYAN L" <bs3131@att.com>
CC: Charles Pritchard <chuck@jumis.com>, "public-media-capture@w3.org" <public-media-capture@w3.org>
On 06/03/2012 11:16 PM, SULLIVAN, BRYAN L wrote:
> I like the idea. I had not thought of this as a preprocessing scenario, but the objective is similar in principle to face recognition, so it makes sense. And if it prevents the app from having to control the dependent camera functions it's definitely simpler for developers.

I definitely don't like the idea of requiring this to be specified in 
detail before publishing version 1.0 of this spec.

I smell ratholes.

>
> Thanks,
> Bryan Sullivan
>
> -----Original Message-----
> From: Charles Pritchard [mailto:chuck@jumis.com]
> Sent: Sunday, June 03, 2012 11:47 AM
> To: SULLIVAN, BRYAN L
> Cc: public-media-capture@w3.org
> Subject: Re: Barcode Scanning scenario for media capture requirements
>
> On 6/1/2012 3:14 PM, SULLIVAN, BRYAN L wrote:
>> I have some additional requirements from the Barcode Scanning use case that has been discussed recently on the DAP and Sysapps lists.
>>
>> (suggested new section in http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/scenarios.html)
>>
>> 2.7 Barcode scanning
>> Users discover and access barcodes in many different contexts, e.g. product packages, print media, posters, and billboards. Because code interpretation in all these contexts depends on capture of a usable image, the camera access API needs to enable the Webapp or user to select options that will enhance the quality of the image.
> I'd like to see detection of barcodes discussed as a Pre-processing
> (section 5.5) scenario: "Pre-processing scenarios will require the UAs
> to provide an implementation".
>
> The following pseudo-code example fits well with pre-processing while
> avoiding the requirement of a camera controller API.
>
> Sample events:
> {type: face, confidence: .6 }
> {type: barcode, confidence: .9 }
> {type: gesture, confidence: .5 }
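[A sketch of how script might consume such events: filter out low-confidence detections so expensive per-event work only runs on confident ones. The event shapes follow the samples above; the 0.8 threshold and the `filterConfident` helper are assumptions for illustration, not part of the proposal.]

```javascript
// Keep only recognition events at or above a confidence threshold,
// so script-side processing is skipped for weak detections.
function filterConfident(events, threshold) {
  return events.filter(function (e) { return e.confidence >= threshold; });
}

// The sample events from the proposal above.
var events = [
  { type: 'face', confidence: 0.6 },
  { type: 'barcode', confidence: 0.9 },
  { type: 'gesture', confidence: 0.5 }
];

var confident = filterConfident(events, 0.8);
console.log(confident); // → [ { type: 'barcode', confidence: 0.9 } ]
```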
>
> Work-flow:
>
> navigator.getUserMedia({video: true, recognition: true}, doVideo);
>
> function doVideo(stream) {
>   stream.onrecognition = function(e) {
>     if (e.type == 'barcode') {
>       copyVideoFrame(e.mediaStream);
>       navigator.recognize(e, {autofocus: true, timeout: 2000,
> network: false});
>     }
>   };
> }
>
> Technical requirements:
>
> 1. Power efficiency for mobile devices:
> By having the UA do basic recognition, we save a lot of cycles in the
> scripting environment. (see also 5.6.2; performance)
> The author can save additional cycles by turning recognition off when
> it's not needed.
>
> 2. Control over side effects:
> Instead of the UA granting the author permission to do things like zoom
> and turn on a light, the author grants those permissions to the UA.
>
> 2a. autofocus covers all available means the device has to better
> recognize the event type (light, zoom, focus, etc.).
> 2b. timeout limits how long the UA has control of the device and/or the
> time it has to complete enhanced recognition.
> 2c. network prevents the implementation from sending data over the network
> to recognition servers (see also 5.1.1; privacy).
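[A sketch of how a UA might normalize the options object described in 2a–2c before acting on it. The defaults are assumptions: autofocus off unless granted, a 10-second cap on the timeout, and network access denied unless explicitly opted in. The `normalizeRecognizeOptions` name is hypothetical.]

```javascript
// Normalize the author-supplied permission options (2a–2c), applying
// conservative defaults: side effects and network use are opt-in only.
function normalizeRecognizeOptions(opts) {
  opts = opts || {};
  return {
    autofocus: opts.autofocus === true,              // 2a: device-side enhancement, opt-in
    timeout: Math.min(opts.timeout || 2000, 10000),  // 2b: bound how long the UA keeps control
    network: opts.network === true                   // 2c: remote recognition servers, opt-in
  };
}

var normalized = normalizeRecognizeOptions({ autofocus: true, timeout: 2000, network: false });
console.log(normalized); // → { autofocus: true, timeout: 2000, network: false }
```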
>
>
> Draft scenario:
>
> Point out a barcode.
>
> Alice brings a print-out to a kiosk; the print-out contains 2D barcodes
> next to each product.
> She wants to learn more about a particular product, so she simply
> points at the product.
>
> Recognition identifies her, her gesture and the barcodes on the page;
> scripting processes the barcode closest to her finger.
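[A sketch of the kiosk scenario's scripting step: given recognized barcodes and a recognized finger gesture, pick the barcode closest to the finger. The x/y fields on events and the `closestBarcode` helper are assumptions; the proposal does not define event geometry.]

```javascript
// Return the barcode whose reported position is nearest the finger,
// comparing squared distances (no need for the square root).
function closestBarcode(barcodes, finger) {
  var best = null, bestDist = Infinity;
  barcodes.forEach(function (b) {
    var dx = b.x - finger.x, dy = b.y - finger.y;
    var d = dx * dx + dy * dy;
    if (d < bestDist) { bestDist = d; best = b; }
  });
  return best;
}

var barcodes = [
  { type: 'barcode', code: 'A', x: 10, y: 10 },
  { type: 'barcode', code: 'B', x: 200, y: 40 }
];
var finger = { type: 'gesture', x: 190, y: 35 };
console.log(closestBarcode(barcodes, finger).code); // → 'B'
```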
>
> -Charles
>
>
Received on Monday, 4 June 2012 08:29:57 UTC
