RE: Video Geotagging Protocol For Electronic Maps: Concept

Hi Rob,

From: Rob Smith [] 
Sent: January 9, 2018 10:22 AM
To: Rushforth, Peter (NRCan/RNCan) <>
Subject: Re: Video Geotagging Protocol For Electronic Maps: Concept

> However, I notice that your example exclusively uses viewport- or screen-based co-ordinates: 

In the example you pointed to, I'm trying to demonstrate the progressive enhancement possibilities for the <map> and <area> elements, such that the HTML standard could, if it chose to do so, extend the semantics of those elements so that a fallback behaviour is (almost) inherent for these elements in 'future browsers'.  As such, screen/viewport coordinates are necessary in the child <area> elements, since these are the expected and standard semantics of the area@coords attribute today.  I imagine that there may be objections to extending the behaviour of the <map> element, from what it does today, to implement modern web maps.  However, I think that discussion / debate is warranted and worth having, because a) existing so-called 'client-side' image maps with areas are in fact very similar in concept to modern web maps with features - essentially a 2D image (which is fetched) on which vector data is drawn and symbolized, and b) to avoid confusion and duplicative concepts in future HTML, we should first consider whether a proposed new element has an existing antecedent or competing concept.
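To illustrate the relationship between the two coordinate spaces, here is a small sketch (my own, not part of any spec) of how a future browser might relate today's screen-based area@coords values to geographic space, given the map's current WGS84 extent and pixel dimensions:

```javascript
// Hypothetical helper (an assumption for illustration, not MapML API):
// convert a screen-pixel position inside the map to WGS84, assuming a
// simple equirectangular relationship between pixels and degrees.
function screenToWgs84(x, y, extent, widthPx, heightPx) {
  // extent: { west, south, east, north } in degrees (WGS84)
  const lon = extent.west + (x / widthPx) * (extent.east - extent.west);
  // screen y grows downward, latitude grows upward
  const lat = extent.north - (y / heightPx) * (extent.north - extent.south);
  return [lon, lat]; // MapML axis order: longitude, latitude
}

// e.g. the centre of an <area shape="circle" coords="150,100,5">
// on a 300x200px map of the extent below:
const extent = { west: -80, south: 40, east: -70, north: 50 };
const centre = screenToWgs84(150, 100, extent, 300, 200);
// centre is [-75, 45]
```

Real projections are of course more involved; the point is only that the pixel semantics of area@coords and a geographic extent are mechanically relatable.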

> Does it also support (lat,long) co-ordinates?

Yes, this is the "WGS84" 'projection'.  For example, although it is a simple example: if you view-source and look at the content of the 'Biodome' layer, you can see an example of a (point) feature in-line in the <layer-> element.  Note that the axis order defined by MapML is (longitude, latitude).  Currently the client supports point, line and polygon, with the intention to extend support to the standard multi-part features as well.  If you turn on the 'Canvec' layer in the example here: (and I imagine wait a while, due to non-optimization of features and the cross-oceanic link) you can see many examples of different types of features being rendered (the simulation uses SVG under the hood).
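Since the (longitude, latitude) axis order trips people up, a tiny sketch of parsing a MapML-style coordinate string for a point (the coordinate values here are illustrative, roughly the Montreal Biodome):

```javascript
// Illustrative only: MapML serializes coordinates in (longitude, latitude)
// axis order. Parse a whitespace-separated point coordinate string.
function parsePoint(text) {
  const [lon, lat] = text.trim().split(/\s+/).map(Number);
  return { lon, lat };
}

// First value is longitude (negative = west), second is latitude:
const p = parsePoint("-73.5498 45.5598");
// p.lon === -73.5498, p.lat === 45.5598
```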

This is a prototype.  In Testbed 14, I hope we will be able to arrive at an improved model for features, by relying more on existing HTML markup with microdata attributes for feature properties.

> I want video map annotation to be capable but lightweight, in a similar fashion to WebVTT subtitles, and am keen to avoid unnecessary complexity. 

> Is MapML intended for map annotation, HUD display, UI construction, or all of these?

The first objective is to be able to represent "a lot" of existing spatial data and services as MapML without extensive conversion, so that HTML applications could become the first clients of MapML.  Once that is accomplished, we can certainly think about more advanced applications.  Implicit in this objective is being able to describe a 2D map area, integrating features, images and tiles.  Animation via CSS and/or other declarative means is in scope, I think.
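As a concrete sense of what "integrating features, images and tiles" involves on the client, here is a sketch of the standard "slippy map" tile addressing (the common Web Mercator / OSM-style scheme, not anything MapML-specific) that relates a geographic point to the tile that covers it:

```javascript
// Standard Web Mercator tile math: which tile (x, y) at a given zoom
// covers a (lon, lat) point. 2^zoom tiles span the world in each axis.
function lonLatToTile(lon, lat, zoom) {
  const n = 2 ** zoom;
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y };
}

// e.g. a point near Ottawa at zoom 10:
const tile = lonLatToTile(-75.7, 45.4, 10);
// tile.x === 296, tile.y === 366
```

A client that can do this arithmetic can fetch tile images and drape vector features over them in one coordinate frame.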

> An HTML5 video DOM object generates around four time updates per second and doesn’t generate ’new frame’ events, which indicates the level of detail intended. 

I did not know that, but perhaps such events' locations could be captured as WGS84 coordinate pairs and rendered as a line-segment feature.  Perhaps an animation could move the pointy part of a feature balloon to point to the appropriate captured vertex as the video played, or something like that.  That animation might also have to pan and/or zoom the map, depending on the altitude of the video.
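A rough sketch of what I mean, accumulating (lon, lat) samples taken on a video's timeupdate events (fired roughly four times per second) into a MapML-style coordinate string for a line feature. `getDronePosition` is hypothetical, standing in for whatever telemetry the drone exposes:

```javascript
// Sketch: build up a line feature's coordinate string from position
// samples captured over time. Axis order is (longitude, latitude).
function makeTrack() {
  const vertices = [];
  return {
    addVertex(lon, lat) {
      vertices.push(`${lon} ${lat}`);
    },
    toCoordinates() {
      return vertices.join(" ");
    },
  };
}

// Wiring it to a <video> element might look like (hypothetical telemetry):
// video.addEventListener("timeupdate", () => {
//   const [lon, lat] = getDronePosition(video.currentTime);
//   track.addVertex(lon, lat);
// });

const track = makeTrack();
track.addVertex(-75.70, 45.40);
track.addVertex(-75.69, 45.41);
// track.toCoordinates() === "-75.7 45.4 -75.69 45.41"
```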

We have discussed (a wee bit) the idea of adding <form> with <input type="line|point|polygon|etc"...> to MapML so that the map user could select that input and capture the feature using the input device.  Maybe the video feed from a drone could be used to provide that input?  I suppose that would depend on the information available from the drone sensors.

> There’s a danger of overloading the web browser with too many updates, which would result in a bad user experience, particularly in Javascript. That said, the protocol could be used for frame-by-frame accuracy if required, though a dedicated video suite would probably be a more suitable viewer than a web browser for that specific purpose.

Perhaps.  Web browsers are (very) capable platforms these days, especially with the incipient standard of WebAssembly on the horizon.  I imagine that a MapML document that had embedded JavaScript, which was itself the browsing context (of a hypothetical future browser), might be able to filter frame events to provide reasonably sized spatial features for the drone feed in your use case (which is a great use case, BTW).
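One simple form that filtering could take (a sketch of my own, with an arbitrary threshold): only keep a new vertex when the position has moved some minimum distance from the last kept one, so that a long feed still produces a reasonably sized line feature:

```javascript
// Sketch: distance-based vertex filter. Keeps a sample only if it has
// moved at least `minDeg` degrees (straight-line, in degree space) from
// the last kept sample. Threshold and degree-space distance are
// simplifying assumptions for illustration.
function makeFilter(minDeg) {
  let last = null;
  return function keep([lon, lat]) {
    if (last === null || Math.hypot(lon - last[0], lat - last[1]) >= minDeg) {
      last = [lon, lat];
      return true;
    }
    return false;
  };
}

const keep = makeFilter(0.001);
const samples = [
  [-75.7000, 45.4000],
  [-75.7001, 45.4000], // moved ~0.0001 degrees: dropped
  [-75.7050, 45.4010], // moved far enough: kept
];
const kept = samples.filter(keep);
// kept.length === 2
```

A real implementation would likely use proper geodesic distance and perhaps post-hoc simplification, but even this much would tame a 4-updates-per-second stream.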

> A use case example may help to illustrate the intention more clearly - see below - but the key element is that a human viewer can better interpret the footage, either at the time or after the event.

> Use Case: Coastguard/Mountain Rescue


Peter Rushforth

Technology Advisor
Canada Centre for Mapping and Earth Observation / Earth Sciences Sector
Natural Resources Canada / Government of Canada / Tel: 613-759-7915

Conseiller technique
Centre canadien de cartographie et d’observation de la Terre / Secteur des sciences de la Terre
Ressources naturelles Canada / Gouvernement du Canada / Tél: 613-759-7915

Received on Tuesday, 9 January 2018 18:04:44 UTC