Re: Video Geotagging Protocol For Electronic Maps: Concept

Rob,

The Oblique Imagery DWG has only recently been reactivated, so its web presence is not yet available; it should be up in the coming weeks.

I’ll admit to not being an expert on MISB, but I have found the “handbook” to be very useful: http://www.gwg.nga.mil/misb/docs/misp/MISP-2018.1_Motion_Imagery_Handbook.pdf. The most relevant standard is likely ST 0601.11: http://www.gwg.nga.mil/misb/docs/standards/ST0601.11.pdf

There are also commercial systems that are relevant in scope. One with which I have worked comes from Red Hen Systems: https://www.redhensystems.com/

Best Regards,
Scott

> On Jan 12, 2018, at 9:22 AM, Rob Smith <rob.smith@awayteam.co.uk> wrote:
> 
> Scott,
> 
> Thanks for your response.
> 
> WAMI appears to do almost the converse of what I’m proposing, as it allows a geolocation point or path to be chosen and displays an associated video feed from the wide-area coverage data. I’m proposing a protocol that allows a mobile video feed to be associated with a geolocation point or path and to present an electronic map which highlights particular features. WAMI is also aimed at high-end equipment, whereas this proposal targets low-end kit, e.g. mobile phones, for wider accessibility.
> 
> Many of the proposal’s use cases involve dash cams, helmet cams and drones, so there may be an overlap with the Oblique Imagery Domain Working Group. Please let me know the best way to raise this with them, assuming they’re interested in this topic, as I’m unable to see them listed on the OGC website: http://www.opengeospatial.org/projects/groups/wg
> 
> I’ve had a look at the Motion Imagery Standards Board, as you suggested, and agree that we’re likely to share common interests. Do you have a particular MISB standard or working group in mind that relates to video geolocation?
> 
> Many thanks for your time, and I look forward to hearing from you.
> 
> Rob Smith
> 
> Away Team
> www.awayteam.co.uk
> 
>> On 9 Jan 2018, at 22:29, Scott Simmons <ssimmons@opengeospatial.org> wrote:
>> 
>> Hi Rob,
>> 
>> I recommend that you evaluate the specification for Wide Area Motion Imagery (WAMI) here:
>> https://portal.opengeospatial.org/files/?artifact_id=50486
>> 
>> This specification describes geotagging of frames from motion imagery as well as methods for access and service.
>> 
>> We also have rekindled the Oblique Imagery Domain Working Group in the OGC. This group is focused on the geospatial tagging and use of images and videos NOT taken from directly overhead.
>> 
>> You may also want to take a look at the work of the Motion Imagery Standards Board, as they have been working on similar problems for many years (granted, with equipment and sensors that are generally highly capable):
>> http://www.gwg.nga.mil/misb/
>> 
>> Best Regards,
>> Scott
>> 
>> Scott Simmons
>> Executive Director, Standards Program
>> Open Geospatial Consortium (OGC)
>> tel +1 970 682 1922
>> mob +1 970 214 9467
>> ssimmons@opengeospatial.org
>> 
>> The OGC: Making Location Count…
>> www.opengeospatial.org
>> 
>> 
>> 
>> 
>>> On Jan 9, 2018, at 11:04 AM, Rushforth, Peter (NRCan/RNCan) <peter.rushforth@canada.ca> wrote:
>>> 
>>> Hi Rob,
>>> 
>>> From: Rob Smith [mailto:rob.smith@awayteam.co.uk]
>>> Sent: January 9, 2018 10:22 AM
>>> To: Rushforth, Peter (NRCan/RNCan) <peter.rushforth@canada.ca>
>>> Cc: public-sdwig@w3.org
>>> Subject: Re: Video Geotagging Protocol For Electronic Maps: Concept
>>> 
>>> 
>>>> However, I notice that your example exclusively uses viewport- or screen-based co-ordinates: http://maps4html.github.io/Web-Map-Custom-Element/blog/progressive-web-maps.html#toc-show
>>> 
>>> In the example you pointed to, I'm trying to demonstrate the progressive enhancement possibilities for the <map> and <area> elements: the HTML standard could, if it chose to do so, extend the semantics of those elements so that a fallback behaviour is (almost) inherent for them in 'future browsers'.  As such, screen/viewport coordinates are necessary in the child <area> elements, since these are the expected and standard semantics for the area@coords attribute today.  I imagine that there may be objections to extending the behaviour of the <map> element, from what it does today, to implement modern web maps.  However, I think that discussion / debate is warranted and worth having, because a) existing so-called 'client-side' image maps with areas are in fact very similar in concept to modern web maps with features, essentially a 2D image (which is fetched) on which vector data is drawn and symbolized, and b) to avoid confusion and duplicative concepts in future HTML, we should first ask whether a proposed new element has an existing antecedent or competitor.
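>>> 
>>> As a concrete reminder, today's area@coords semantics look something like this (file name and coordinate values invented for illustration; rect coords are left,top,right,bottom and circle coords are x,y,radius, all in image pixels):
>>> 
>>>   <img src="campus.png" usemap="#campus" alt="Campus map" width="320" height="200">
>>>   <map name="campus">
>>>     <area shape="rect" coords="200,40,260,90" href="#gym" alt="Gym">
>>>     <area shape="circle" coords="120,80,15" href="#library" alt="Library">
>>>   </map>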
>>> 
>>>> Does it also support (lat,long) co-ordinates?
>>> 
>>> Yes, this is the "WGS84" 'projection'.  For example (albeit a simple one): http://maps4html.github.io/Web-Map-Custom-Element/html-author-mapml-content.html. If you view source and look at the content of the 'Biodome' layer, you can see an example of a (point) feature inline in the <layer-> element.  Note that the axis order defined by MapML is (longitude, latitude). Currently the client supports point, line and polygon, with the intention to extend support to the standard multi-part features as well.  If you turn on the 'Canvec' layer in the example here: http://geogratis.gc.ca/mapml/client/map-carte.html (and, I imagine, wait a while, due to unoptimized features and the cross-oceanic link), you can see many examples of different types of features being rendered (it's a simulation that uses SVG under the hood).
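>>> 
>>> For anyone reading along, the inline (point) feature in that layer looks roughly like this (reconstructed from the draft spec, so element names may differ slightly; note longitude before latitude):
>>> 
>>>   <feature>
>>>     <properties>Biodome</properties>
>>>     <geometry>
>>>       <point>
>>>         <coordinates>-73.5497 45.5594</coordinates>
>>>       </point>
>>>     </geometry>
>>>   </feature>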
>>> 
>>> This is a prototype.  In Testbed 14, I hope we will be able to arrive at an improved model for features, by relying more on existing HTML markup with microdata attributes for feature properties.
>>> 
>>>> I want video map annotation to be capable but lightweight, in a similar fashion to WebVTT subtitles, and am keen to avoid unnecessary complexity. 
>>> +1
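>>> 
>>> For reference, the WebVTT-like binding Rob alludes to might attach to the video as a metadata text track, something like this (a sketch only; file names are invented, and the cue payload format would be up to the protocol):
>>> 
>>>   <video src="patrol.webm" controls>
>>>     <track kind="metadata" src="route.vtt" default>
>>>   </video>
>>> 
>>> Cues in route.vtt could then carry positions as their payloads, read via the standard TextTrack API.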
>>> 
>>>> Is MapML intended for map annotation, HUD display, UI construction, or all of these?
>>> 
>>> The first objective is to be able to represent "a lot" of existing spatial data and services as MapML without extensive conversion, so that HTML applications could become the first client of MapML.  Once accomplished, we can certainly think about more advanced applications.  Implicit in this objective is to be able to describe a 2D map area, integrating features, images and tiles.  Animation via CSS and / or other declarative means is in scope, I think.  
>>> 
>>>> An HTML5 video DOM object generates around four time updates per second and doesn’t generate ’new frame’ events, which indicates the level of detail intended. 
>>> Did not know that, but perhaps such events' locations could be captured as WGS84 coordinate pairs, and rendered as a line segment feature.  Perhaps an animation could move the pointy part of a feature balloon to point to the appropriate captured vertex as the video played, or something like that. That animation might also have to pan and/or zoom the map depending on the altitude of the video.
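>>> 
>>> A rough sketch of that capture (the timeupdate event is standard; positionAt() is a hypothetical lookup from media time to a WGS84 position):
>>> 
>>>   <script>
>>>     const video = document.querySelector("video");
>>>     const vertices = [];  // [longitude, latitude] pairs, in MapML axis order
>>>     video.addEventListener("timeupdate", () => {
>>>       // positionAt() stands in for whatever maps media time to a position
>>>       const [lon, lat] = positionAt(video.currentTime);
>>>       vertices.push([lon, lat]);
>>>       // ...extend the rendered line segment feature with the new vertex...
>>>     });
>>>   </script>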
>>> 
>>> We have discussed (a wee bit) the idea of adding <form> with <input type="line|point|polygon|etc"...> to MapML so that the map user could select that input and capture the feature using the input device.  Maybe the video feed from a drone could be used to provide that input?  I suppose that would depend on the information available from the drone sensors.
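>>> 
>>> In markup, that idea (purely hypothetical at this point; type="line" is not standard HTML) might look like:
>>> 
>>>   <form action="features" method="post">
>>>     <input type="line" name="flightpath">
>>>     <input type="submit" value="Save feature">
>>>   </form>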
>>> 
>>>> There’s a danger of overloading the web browser with too many updates, which would result in a bad user experience, particularly in JavaScript. That said, the protocol could be used for frame-by-frame accuracy if required, though a dedicated video suite would probably be a more suitable viewer than a web browser for that specific purpose.
>>> 
>>> Perhaps.  Web browsers are (very) capable platforms these days, especially with the incipient standard of WebAssembly on the horizon.  I imagine that a MapML document with embedded JavaScript, which was itself the browsing context (of a hypothetical future browser), might be able to filter frame events to provide reasonably sized spatial features for the drone feed in your use case (which is a great use case, BTW).
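>>> 
>>> For example, continuing the sketch above, a simple time-based filter would keep the captured feature to a manageable size (the one-second threshold is arbitrary, and capturePosition() is a hypothetical helper):
>>> 
>>>   <script>
>>>     let lastCapture = -Infinity;
>>>     const MIN_INTERVAL = 1.0;  // seconds between captured vertices (arbitrary)
>>>     video.addEventListener("timeupdate", () => {
>>>       if (video.currentTime - lastCapture < MIN_INTERVAL) return;  // drop excess updates
>>>       lastCapture = video.currentTime;
>>>       capturePosition(video.currentTime);  // hypothetical: record the WGS84 vertex
>>>     });
>>>   </script>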
>>> 
>>>> A use case example may help to illustrate the intention more clearly - see below - but the key element is that a human viewer can better interpret the footage, either at the time or after the event.
>>> 
>>>> Use Case: Coastguard/Mountain Rescue
>>> 
>>> 
>>> Cheers,
>>> Peter
>>> 
>>> 
>>> Peter Rushforth
>>> 
>>> Technology Advisor
>>> Canada Centre for Mapping and Earth Observation / Earth Sciences Sector
>>> Natural Resources Canada / Government of Canada
>>> peter.rushforth@canada.ca / Tel: 613-759-7915
>>> 
>>> Conseiller technique
>>> Centre canadien de cartographie et d’observation de la Terre / Secteur des sciences de la Terre
>>> Ressources naturelles Canada / Gouvernement du Canada
>>> peter.rushforth@canada.ca / Tél: 613-759-7915
>>> 
>>> 
>>> 
>> 
> 

Received on Friday, 12 January 2018 17:59:55 UTC