RE: Barcode Scanning scenario for media capture requirements

I like the idea. I had not thought of this as a preprocessing scenario, but the objective is similar in principle to face recognition, so it makes sense. And if it spares the app from having to control the dependent camera functions, it's definitely simpler for developers.

Thanks,
Bryan Sullivan 

-----Original Message-----
From: Charles Pritchard [mailto:chuck@jumis.com] 
Sent: Sunday, June 03, 2012 11:47 AM
To: SULLIVAN, BRYAN L
Cc: public-media-capture@w3.org
Subject: Re: Barcode Scanning scenario for media capture requirements

On 6/1/2012 3:14 PM, SULLIVAN, BRYAN L wrote:
> I have some additional requirements from the Barcode Scanning use case that has been discussed recently on the DAP and Sysapps lists.
>
> (suggested new section in http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/scenarios.html)
>
> 2.7 Barcode scanning
> Users discover and access barcodes in many different contexts, e.g. product packages, print media, posters, and billboards. Because code interpretation in all these contexts depends upon capturing of a usable image, the camera access API needs to enable the Webapp or user to select options that will enhance the quality of the image.

I'd like to see detection of barcodes discussed as a Pre-processing 
(section 5.5) scenario: "Pre-processing scenarios will require the UAs 
to provide an implementation".

The following pseudo-code example fits well with pre-processing while 
avoiding the need for a camera controller API.

Sample events:
{type: face, confidence: .6 }
{type: barcode, confidence: .9 }
{type: gesture, confidence: .5 }

Work-flow:

navigator.getUserMedia({video: true, recognition: true}, doVideo);

function doVideo(mediaStream) {
  mediaStream.onrecognition = function(e) {
    if (e.type === 'barcode') {
      copyVideoFrame(e.mediaStream);
      navigator.recognize(e, {autofocus: true, timeout: 2000,
                              network: false});
    }
  };
}
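To illustrate how an app might act on the sample events above, here is a minimal runnable sketch of gating on the confidence value; the event shape follows the samples, while the threshold value and function name are my own assumptions, not part of the proposal:

```javascript
// Hypothetical recognition events, shaped like the samples above.
const events = [
  { type: 'face',    confidence: 0.6 },
  { type: 'barcode', confidence: 0.9 },
  { type: 'gesture', confidence: 0.5 }
];

// Keep only events the app considers reliable enough to act on.
// The 0.7 threshold is an illustrative assumption.
function aboveThreshold(events, threshold) {
  return events.filter(e => e.confidence >= threshold);
}

console.log(aboveThreshold(events, 0.7)); // only the barcode event passes
```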

Technical requirements:

1. Power efficiency for mobile devices:
By having the UA do basic recognition, we save a lot of cycles in the 
scripting environment. (see also 5.6.2; performance)
The author can save additional cycles by turning recognition off when 
it's not needed.

2. Control over side effects:
Instead of the UA granting the author permission to do things like zoom 
and turn on a light, the author grants those permissions to the UA.

2a. autofocus would cover all available means the device has to better 
recognize the event type (light, zoom, focus, etc.).
2b. timeout limits how long the UA has control of the device and/or the 
time it has to complete enhanced recognition.
2c. network prevents the implementation from sending data over the 
network to recognition servers. (see also 5.1.1; privacy)
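The option handling in 2a-2c could be sketched as a UA-side sanitizer for the hypothetical recognize() options. The function name and the maximum timeout cap are assumptions of mine, not part of the proposal:

```javascript
// Assumed UA policy: an upper bound on how long the app may hold the device.
const MAX_TIMEOUT_MS = 5000;

// Sanitize the options the author passes to the hypothetical
// navigator.recognize() call, enforcing the 2a-2c semantics.
function sanitizeRecognizeOptions(opts) {
  return {
    // 2a: capture-quality controls (light, zoom, focus) default to off.
    autofocus: opts.autofocus === true,
    // 2b: clamp the timeout into [0, MAX_TIMEOUT_MS].
    timeout: Math.min(Math.max(opts.timeout || 0, 0), MAX_TIMEOUT_MS),
    // 2c: network recognition only when explicitly granted.
    network: opts.network === true
  };
}

console.log(sanitizeRecognizeOptions({autofocus: true, timeout: 2000, network: false}));
```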


Draft scenario:

Point out a barcode.

Alice brings a print-out to a kiosk; the print-out contains 2D barcodes 
next to each product.
She wants to learn more about a particular product, so she simply 
points at it.

Recognition identifies her, her gesture, and the barcodes on the page; 
scripting processes the barcode closest to her finger.
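The last step of the scenario could be sketched as follows. Note the coordinates on the events are an assumption: the sample events in this proposal carry only a type and a confidence, so positions would have to be added for this to work:

```javascript
// Sketch: pick the barcode closest to the recognized finger position.
// Event coordinates (x, y) are assumed, not part of the sample events.
function nearestBarcode(barcodes, finger) {
  let best = null, bestDist = Infinity;
  for (const b of barcodes) {
    const d = Math.hypot(b.x - finger.x, b.y - finger.y);
    if (d < bestDist) { bestDist = d; best = b; }
  }
  return best;
}

// Two barcodes on the page; Alice's finger is near the second one.
const barcodes = [
  { id: 'A', x: 10,  y: 10 },
  { id: 'B', x: 200, y: 40 }
];
console.log(nearestBarcode(barcodes, { x: 190, y: 50 }).id); // 'B'
```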

-Charles

Received on Sunday, 3 June 2012 21:18:13 UTC