RE: Temporal Functions in HTML5?

Thanks everyone for commenting on my questions about temporal functions.

The suggestions you've given are mostly related to animation functions. I need to check what is meant here by "animation": it could mean traditional animation, i.e. a sequence of graphic or image frames played in a timed fashion to create the illusion of motion, or it could mean the "bringing to life" of otherwise inanimate objects. The latter is more general, and that's what I'd hope the definition to be rather than the traditional one. There may be other possible definitions, but I'll focus on these two.

The animation machinery, which can cue or trigger any media object, program or device in synchronization with some reference, could be generalized to cover the types of functionality I referred to in my original email. If so, calling them "animation" functions would seem constraining.

Perhaps protocols could be developed to make temporal controls easier, possibly as modifications of existing, more vertical protocols with temporal features. I don't think protocols alone would suffice, though. There would also need to be temporal components, e.g. temporary clocks, access to shared or centralized clocks or other timing references, and perhaps some new tags (or at least new parameters to existing tags).
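As a rough sketch of what one such temporal component might look like, here is a minimal app-local clock with the start/stop/reset/set capabilities described below; all names here are hypothetical and not part of any existing spec. The time source is injectable so the clock could follow any timing reference (wall clock, a shared clock, a media timeline):

```javascript
// Hypothetical app-local clock sketch: start/stop/reset/set.
// "now" is injectable so the clock can follow any timing reference.
function createAppClock(now = () => Date.now()) {
  let base = 0;         // accumulated elapsed ms while stopped
  let startedAt = null; // reference time when running, or null if stopped
  return {
    start() { if (startedAt === null) startedAt = now(); },
    stop()  { if (startedAt !== null) { base += now() - startedAt; startedAt = null; } },
    reset() { base = 0; if (startedAt !== null) startedAt = now(); },
    set(ms) { base = ms; if (startedAt !== null) startedAt = now(); },
    elapsed() { return startedAt === null ? base : base + (now() - startedAt); }
  };
}
```

With a fake time source, such a clock is also straightforward to test deterministically.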

Please see my other comments, prefaced with +++, below.

"Track" seems synergistic with temporal function elements. 


Q me

-----Original Message-----
From: Silvia Pfeiffer [] 
Sent: Thursday, May 24, 2012 11:26 PM
Subject: Re: Temporal Functions in HTML5?

On Fri, May 25, 2012 at 3:25 AM, GAUSMAN, PAUL <> wrote:
> Does anyone find anything in the HTML5 documentation that addresses temporal issues?
> Examples would be:
> *       A time reference (absolute and/or relative,)

There is requestAnimationFrame, see . It is implemented in some
browsers and near release in others: . It provides timing
controls for script-based animations.

> *       Time-related cues and/or scripting,

The <track> element provides cues and scripting functionality in
relation to the timeline of a media element, see .

> *       A clock selection capability,

There is discussion about allowing other clocks than just the
wallclock timing for requestAnimationFrame, see .

> *       Independent clock functions (e.g. a clock instance just for an app or HTML5 based experience, with start/stop/reset/set capabilities,) timing parameters within tags, time driven push functions, bidirectional event timing functions (logging time between inbound and outbound events,)

There is also setTimeout and setInterval for timing events and the
execution of functions. There is no feature that I know of that times
the tags - and I don't quite understand the use case for it.
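For the "bidirectional event timing functions (logging time between inbound and outbound events)" case in the quoted question, existing primitives are probably enough. A hypothetical sketch, with an injectable time source for testability:

```javascript
// Hypothetical bidirectional event-timing log: record inbound and
// outbound events with timestamps and report the gaps between them.
function createEventTimer(now = () => Date.now()) {
  const events = [];
  return {
    log(direction, name) { events.push({ direction, name, at: now() }); },
    // Time from each inbound event to the next outbound event.
    responseTimes() {
      const gaps = [];
      let lastIn = null;
      for (const e of events) {
        if (e.direction === 'in') lastIn = e;
        else if (e.direction === 'out' && lastIn !== null) {
          gaps.push(e.at - lastIn.at);
          lastIn = null;
        }
      }
      return gaps;
    }
  };
}
```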

+++I was thinking that a web app could use this as one indicator of user intent or the like: quick responses may mean focus, slow ones casual use, and prolonged responses (minutes, hours, days) could mean low interest. This could enable an app to change "gears" to make the experience more meaningful to the user.
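The idea above could be as simple as bucketing response latency. A hypothetical sketch (the bucket boundaries are made-up illustrative values, not from any spec):

```javascript
// Classify user engagement from response latency, as suggested above.
// Threshold values are arbitrary and purely illustrative.
function classifyEngagement(responseMs) {
  if (responseMs < 2000) return 'focused';   // quick response
  if (responseMs < 60000) return 'casual';   // within a minute
  return 'low-interest';                     // minutes, hours, days
}
```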

> *       Etc.,
> *       Anything that addresses timing in a multi-device, multi-app, multi-user experience framework.

These are all forms of distributed applications for which
synchronization is indeed hard. You will likely need protocols to
solve such synchronization issues - markup alone will not help.

+++Good thought. I believe that there should also be tags, parameters and core functionality to enable temporal aspects of any appropriate functionality.
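One building block such a protocol would likely need is offset estimation against a shared or centralized clock. A minimal NTP-style round-trip sketch (hypothetical, not from any spec):

```javascript
// NTP-style clock offset estimate: ask a reference clock for its time,
// and assume its reply was sampled halfway through the round trip.
// t0 = local send time, serverTime = reference reading, t1 = local receive time.
function estimateOffset(t0, serverTime, t1) {
  const midpoint = (t0 + t1) / 2; // assumed local time when the server sampled
  return serverTime - midpoint;   // add this to local time to approximate server time
}
```

The estimate is exact only when the network delay is symmetric; real protocols average several samples.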

> Supporting functions might include inter-object/inter-event/inter-device messaging and WebRTC interactive functions.

The WebRTC spec allows data as well as media streams to be exchanged
directly between peer browsers. You might want to check out the
PeerData API, see

+++I haven't read this yet, but I'll comment that inter-browser/inter-app/inter-window/inter-object/inter-whatever capabilities are certainly part of this. While it's important to be able to send triggers and other data between any entities, the intent of my comments is not only to make entities interact, but to seamlessly converge multiples of any/all entities into a natural (or unnatural, if desired) user experience in real time and real space.

> Existing functionality which could use this includes Closed Captioning and playlist execution but these are just the tip of the tip of the iceberg compared to the potential emerging applications, like multi-device, multi-user, multi-location user experiences, VR, AR, etc.

Closed captions are a solved issue in HTML5. We have the <track>
element for it with @kind=captions, see .

+++Maybe other "@kind" values could be: clock, timecode, cue, playlist, HTML, etc. (Please excuse me if any of these already exist.)



Received on Friday, 25 May 2012 13:38:12 UTC