[Cloud Browser] initial concept

Hi Cloud Browser TF,

I started to create a use case as discussed in the last call (see: https://www.w3.org/2011/webtv/wiki/Main_Page/Cloud_Browser_TF/UseCases/MSE ). Unfortunately it is quite hard to come up with use cases because it depends a lot on what we would like to achieve. I believe the concept is not straightforward (in contrast with other task forces) and I am not sure we make it more tangible by creating use cases. Therefore I created an initial concept, just to have something to talk about in the next call; by all means, it is not intended as final work. You can find it here:


In short: you have a runtime environment (RTE). The RTE can initialise a cloud browser, which is a regular browser, only processed remotely (hence "cloud browser"). The cloud browser lives within a cloud environment, which is responsible for creating, terminating, etc. the cloud browsers. The browser is responsible for the browsing context and for rendering it to the screen. The latter is done in the RTE, which receives the rendered output as a stream from the cloud environment. The RTE should also be able to process out-of-band media and combine it with the previously mentioned stream. This could be needed due to infrastructure constraints. An example would be a PVR attached to the device executing the runtime environment: you wouldn't want to send the media from the PVR to the cloud first. The same issue applies to a VOD pump, which (in the case of broadcast) needs a tuner that is located next to the runtime environment, locally.
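Purely as a sketch of the lifecycle the cloud environment would manage (every message name and field below is hypothetical; nothing here is standardised), the RTE/cloud-environment exchange could look something like this:

```javascript
// Hypothetical signalling messages for the cloud browser lifecycle.
// All names and fields are illustrative only, not part of any specification.

// The RTE asks the cloud environment to create a cloud browser instance.
function createBrowserRequest(url, screen) {
  return {
    type: 'create-cloud-browser',
    url,    // initial page the cloud browser should load
    screen, // e.g. { width: 1920, height: 1080 }, so rendering matches the device
  };
}

// The cloud environment answers with a session handle plus the details the
// RTE needs to receive and display the rendered output stream.
function createBrowserResponse(sessionId, streamUrl) {
  return { type: 'cloud-browser-created', sessionId, streamUrl };
}

// The RTE asks the cloud environment to tear the instance down again.
function terminateBrowserRequest(sessionId) {
  return { type: 'terminate-cloud-browser', sessionId };
}
```

Whether such messages should be standardised at all is exactly the scoping question below.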

We should discuss what should be in scope for this task force. Personally I think we have three main concerns:

- defining the gaps in the RTE
- signalling (although we may decide that this should be agnostic, much like WebRTC signalling)
- the stream to the RTE, so that multiple cloud browser vendors are compatible with each other

I jotted down a potential outcome to make things a bit more concrete:

The runtime environment is a lightweight local browser which implements the (abstract) Media Capture and Streams specification ( https://www.w3.org/TR/mediacapture-streams/ ) or (more specifically) the WebRTC specification ( http://www.w3.org/TR/webrtc/ ). It could initialise the cloud browser and receive a stream. The transport and signalling could be agnostic, as they are, for example, when attaching a camera device. Still, it would be beneficial to standardise or leverage existing specifications to enhance interoperability among implementations. For example, signalling could be done with something like the JavaScript Session Establishment Protocol ( http://tools.ietf.org/html/draft-ietf-rtcweb-jsep-12 ) and the stream formats could be constrained to the byte streams specified in the Media Source Extensions byte stream format registry ( https://w3c.github.io/media-source/byte-stream-format-registry.html ) or the WebRTC Video Processing and Codec Requirements ( https://www.ietf.org/id/draft-ietf-rtcweb-video-06.txt ).
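To make the RTE side a bit more concrete, a minimal sketch (assuming a WebRTC transport; the `signalling` object and the message shapes are placeholders, not taken from any specification) of receiving the cloud browser's rendered output could look like:

```javascript
// Sketch of an RTE receiving the cloud browser's rendered output as a
// WebRTC media stream. The `signalling` object (with `send` and an
// assignable `onmessage`) stands in for whatever channel is used;
// a JSEP-style offer/answer exchange is assumed. The peer connection is
// injectable so the sketch can be exercised outside a browser.
function attachCloudBrowserStream(videoElement, signalling,
                                  pc = new RTCPeerConnection()) {
  // The cloud environment delivers the rendered browsing context as a track.
  pc.ontrack = (event) => {
    videoElement.srcObject = event.streams[0];
  };

  // Trickle our ICE candidates back to the cloud environment.
  pc.onicecandidate = (event) => {
    if (event.candidate) {
      signalling.send({ type: 'candidate', candidate: event.candidate });
    }
  };

  // Answer the cloud environment's offer and accept its candidates.
  signalling.onmessage = async (msg) => {
    if (msg.type === 'offer') {
      await pc.setRemoteDescription(msg);
      const answer = await pc.createAnswer();
      await pc.setLocalDescription(answer);
      signalling.send(answer);
    } else if (msg.type === 'candidate') {
      await pc.addIceCandidate(msg.candidate);
    }
  };

  return pc;
}
```

The out-of-band media mentioned earlier (e.g. from a PVR) could then be handled locally, for example via Media Source Extensions, and combined with this stream by the RTE.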

Hope this is useful. Let's discuss this further in the next call.


Colin Meerveld

Received on Wednesday, 27 January 2016 16:17:36 UTC