W3C home > Mailing lists > Public > public-web-and-tv@w3.org > August 2018

Minutes from Media & Entertainment IG call, 7 August 2018

From: Chris Needham <chris.needham@bbc.co.uk>
Date: Fri, 10 Aug 2018 15:04:02 +0000
To: "public-web-and-tv@w3.org" <public-web-and-tv@w3.org>
Message-ID: <590FCC451AE69B47BFB798A89474BB363E15D974@bgb01xud1006>
Hi all,

The minutes from the last Interest Group call on Tuesday 7th August are available [1], and copied below. Thanks again to Lucas Pardue for presenting.

Our next call is Tuesday 4th September, where we're planning to discuss browser testing for media devices. Details to be announced nearer the time.

Kind regards,

Chris (Co-chair, W3C Media & Entertainment Interest Group)

[1] https://www.w3.org/2018/08/07-me-minutes.html


---

W3C
- DRAFT -

Media and Entertainment IG

07 Aug 2018

Attendees

Present

    Francois_Daoust, Kaz_Ashimura, Chris_Needham, Ben_Poor, Giri_Mandyam, Lucas_Pardue, Mark_Vickers, Peter_Thatcher, Steve_Morris, Tatsuya_Igarashi, Will_Law, Masaru_Takechi, Stefan_Pham, Nigel_Megitt, Kazuhiro_Hoya, John_Luther

Regrets

Chair

    Chris, Mark, Igarashi

Scribe

    cpn

Contents

  Topics
    Introduction
    Scalable media delivery on the Web with HTTP Server Push
    Motivation
    What about IP multicast?
    Unidirectional HTTP flows
    API Proposal
    Multicast HTTP/QUIC to the Browser
    Wrap Up
    Next call

Summary of Action Items
Summary of Resolutions

Introduction

<kaz> scribenick: cpn

Chris: I've invited my colleague Lucas to present today on some work we've been doing at IETF and API gaps in the Web platform.

Scalable media delivery on the Web with HTTP Server Push

Lucas: This talk was presented at the recent Web5G workshop, and we wanted to follow up in the M&E IG.
... This is an updated version of the talk I gave there.

<kaz> fyi. Web5G Workshop Report: https://www.w3.org/2017/11/web5g-workshop/report.html


<kaz> Lucas's slides: https://www.w3.org/2011/webtv/wiki/images/5/53/BBC_R%26D_Scalable_media_delivery_on_the_Web_with_HTTP_Server_Push_W3C_Media_and_Entertainment.pdf


Motivation

Lucas: (Slide 3) The BBC is a broadcaster; life used to be simple, with a fixed-cost terrestrial transmission network.
... The internet doesn't work like that. iPlayer usage is increasing.
... (Slide 4) We publish statistics on usage. 272 on demand programmes per month, popular content, Blue Planet 2
... Cost increases with popularity.
... (Slide 5) We've had discussions with people in other markets. Live viewing in the UK is 85%, the rest is time shift or on-demand.
... You can see the graph shows a small 2-3% increase aligned with the World Cup, or Wimbledon.
... This shows linear TV is popular, especially for cultural events. One of the largest events to date was the England vs Tunisia football game at the World Cup 2018.
... (Slide 6) Consumption over traditional broadcast was 18 million, and 3 million on the iPlayer.
... The peaks for internet streaming are different for live.
... What I'm showing here is the potential for live streaming over IP
... (Slide 7) The nature of the content is changing, the move to HD, UHD, HDR.
... Also new content experiences, 360 video, VR, AR. These all require more bits, hence increasing CDN costs.
... (Slide 8) Frequency spectrum for broadcast TV distribution is being reduced, for use by 4G or 5G mobile.
... We didn't provide UHD over broadcast, not enough bandwidth.
... What happens when broadcast goes away?
... (Slide 9) We are chartered to reach 98% of the population, this includes large events.
... We're looking at an all-IP future; how do we do that without breaking the bank?
... It's a 50x increase in load, at much higher bitrates: the HD bit rate is 5-8 Mbit/s, and we did live UHD at 36 Mbit/s.

What about IP multicast?

Lucas: (Slide 11) How to solve this challenge? Can we use IP multicast?
... Layer 3 packet replication, use the inherent capability of network equipment rather than duplicate packets
... IPTV is well deployed today. Works well in managed networks or vertical deployments.
... For example, BT or Sky can manage their networks.
... But we also need to consider OTT.
... 3GPP has MBMS technology.
... (Slide 12) Streaming over HTTP is increasingly embraced in the industry.
... Many kinds of networks and client devices.
... Success driven by use of existing network technologies and CDNs.
... This is all familiar ground in this group, but I cover it as I'm laying the foundation for the next part.
... (Slide 13) Adaptive streaming, aligning to MPEG-DASH, smooth streaming, HLS.
... But we're talking about multicast here, want to take HTTP based media and deliver over multicast
... (Slide 14) Other things out there: 3GPP MBMS, ABR Multicast on CATV networks
... DVB group working on commercial requirements, then technical task force. Reference architecture in DVB BlueBook A176.
... It allows distribution of HTTP resources (files), with error recovery, loss detection and repair.
... Group is now looking at technical solutions
... (Slide 15) Possibly multicast transports, such as NORM, FLUTE, ROUTE, and BBC is working on multicast HTTP/QUIC.
... This is unidirectional delivery of HTTP resources.
... We want to receive files, reconstruct in a way a standard DASH client can play them.
... Different strategies to achieve this. Metadata descriptions for reconstruction.
... Importantly, it's a push mode delivery, having gone first through a discovery phase.
... Can send different bitrates to different multicast groups.
... Clients are dependent on an external agent to do the multicast reception. It could be on a home router or further up.
... This is in the DVB Bluebook.
... What would it look like if we had multicast delivery integrated into the Web platform? So we don't need the upstream box.

Unidirectional HTTP flows

Lucas: (Slide 17) We have an HTTP origin server, with a pull mode DASH streaming session. GET requests to receive segments.
... Also a multicast sender, we call it Multicast HTTP to push to a web application.
... Open a Web Socket connection, wait for events to be pushed. Those events could be DASH segments with their metadata.
... (Slide 18) But we don't think that's the only use case for push mode distribution.
... Scalable media distribution, low latency distribution. These still require the client to know what to ask for.
... Server push can reduce latency, it's a possibility, needs more work to investigate.
... Sending notifications, e.g., server sent events for textual content. What about other kinds of content?
... You can do some of this today. We've tried using Web Sockets to reduce latency for notification of updates, e.g., score updates on our Sports pages. It's costly for us to do at large scale.
... (Slide 19) A CDN infrastructure geared more towards push could be helpful. But an API is needed.
... Existing APIs aren't good for pushing web resources to JavaScript code: resources with URLs and standard metadata.
... Same origin and secure context requirements.
... Let the UA do the heavy lifting, parse, decode, etc.
... Making this available in different contexts, use familiar paradigms.
... (Slide 20) Comparison of push mode APIs: HTTP Server Push, Server Sent Events, Push API, Web Sockets.
... This is a way for us to compare these technologies.
... When we started talking about this, Chris asked about the Push API, could it work? Possibly, but it doesn't handle the encapsulation and mapping that I want to avoid.
... We're not the first to think about HTTP Server Push APIs. Considered in the Fetch spec. For example, Alex Russell's GitHub gist

-> https://gist.github.com/slightlyoff/18dc42ae00768c23fbc4c5097400adfb
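Of the push-mode options compared on Slide 20, Server-Sent Events is the easiest to illustrate concretely. As a minimal sketch, here is an SSE frame as it might appear on the wire for the score-update use case mentioned above, plus a toy parser. The parser and the frame contents are illustrative only; in a browser the EventSource API does this parsing for you.

```javascript
// Toy parser for a single Server-Sent Events frame. Illustrative only:
// browsers implement this inside EventSource, per the HTML spec's
// text/event-stream format.
function parseSseFrame(frame) {
  const event = { event: 'message', data: [] };
  for (const line of frame.split('\n')) {
    if (line.startsWith('event:')) event.event = line.slice(6).trim();
    else if (line.startsWith('data:')) event.data.push(line.slice(5).trim());
    else if (line.startsWith('id:')) event.id = line.slice(3).trim();
  }
  event.data = event.data.join('\n');
  return event;
}

// A hypothetical frame for a live score update:
const frame = 'event: score\nid: 42\ndata: {"home":1,"away":0}';
const parsed = parseSseFrame(frame);
console.log(parsed.event, parsed.data);
```

The server side is just a long-lived HTTP response writing frames like this one, which is why SSE appears in the comparison as the lightest-weight push option.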


Lucas: How do we move this topic forward, and who should we speak to?
... (Slide 22) Of the candidates, we think Fetch Observer seems the best fit.
... It's closely coupled to the HTTP/2 Server Push transport, although this may not be popular now.
... It has mixed results for web performance: it can reduce page load times, but it can also waste bandwidth pushing things the client doesn't need, and even increase page load time.
... People are asking if Server Push is really relevant to the future of HTTP.
... Brad Lassey at Google asked: if we destroyed push, would anyone notice?
... But we think it is useful, to support unidirectional flows of HTTP resources.
... It would be a shame to lose this capability, miss out on exciting new use cases.

API Proposal

Lucas: (Slide 23) We defined a ResourcePushEvent, it aligns to a pseudo-request with a URL and metadata, and then a promise for a response.
... We can use it in a Fetch API like manner.
... It could work with other transport modes, e.g., FLUTE to deliver a resource and generate an event.
... It's more general, not tied to Server Push.
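The proposed shape can be sketched roughly as follows. ResourcePushEvent is the proposal from the talk, not a shipped browser interface, so this mock of its shape (constructor arguments, field names) is an assumption made for illustration; only the Fetch-like usage pattern is what the slides describe.

```javascript
// Illustrative mock of the proposed ResourcePushEvent: a pseudo-request
// (URL plus metadata) and a Promise for a Response, consumed Fetch-style.
class ResourcePushEvent extends Event {
  constructor(url, headers, responsePromise) {
    super('resourcepush');
    this.url = url;                  // pseudo-request URL
    this.headers = headers;          // pseudo-request metadata
    this.response = responsePromise; // resolves to a Fetch-style Response
  }
}

const pushTarget = new EventTarget();
const received = [];

// Consumer: handle a pushed resource much as you would a fetch() result.
pushTarget.addEventListener('resourcepush', (event) => {
  received.push(
    event.response.then((res) =>
      res.text().then((body) => ({ url: event.url, body })))
  );
});

// Producer: in the proposal the UA would dispatch this, driven by HTTP/2
// Server Push or another transport such as FLUTE.
pushTarget.dispatchEvent(new ResourcePushEvent(
  'https://example.com/seg1.m4s',
  { 'content-type': 'video/mp4' },
  Promise.resolve(new Response('segment bytes')),
));
```

Because the event carries a generic resource rather than a transport-specific message, the same consumer code works whatever delivered the resource, which is the point made above about not being tied to Server Push.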

Multicast HTTP/QUIC to the Browser

Lucas: We did some experiments, with HTTP/QUIC to the browser.
... (Slide 25) We have an independent internet draft. It looks at QUIC and the HTTP over QUIC mapping.
... What would need to change to support unidirectional delivery? It relies on Server Push.
... (Slide 26) We have a prototype based on Google QUIC, work underway to move to IETF QUIC.
... A receiver on a Raspberry Pi, with our own library to reconstitute packets.
... If anyone is at IBC, we'll be demoing this in the Future Zone. Please come and see us.
... (Slide 27/28) We wanted to make a proof of concept. Can we cache the received objects and feed them into a player pipeline?
... We made a workaround component, but this still works in push mode, sending push promises to the browser.
... (Slide 29) A demo application, HTML page with DASH.js and a Service Worker.
... The Service Worker has the intelligence in it. We do discovery of the multicast, it subscribes and caches segments just ahead of the player requesting them.
... What actually happens is the Service Worker receives QUIC packets in userland. We used Emscripten and WASM in the Service Worker.
... Feed in packets, it generates a ResourcePushEvent.
... We want to put this into the browser core.
... (Slide 30) Tested working in Firefox and Chrome.
... (Slide 31) There's an Alt-Svc header that advertises the multicast group.
... (Slide 32) We published this in a BBC White Paper after discussion with Alex Russell and others.
... -> https://www.bbc.co.uk/rd/publications/whitepaper336
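The cache-ahead logic described for the Service Worker can be sketched in simplified form. In the actual prototype this runs inside a Service Worker fetch handler with QUIC packet reassembly via WASM; the function names and the unicast fallback here are illustrative stand-ins, not the prototype's API.

```javascript
// Simplified model of the prototype's Service Worker behaviour: segments
// pushed over multicast are cached just ahead of the player requesting
// them; on a cache miss we fall back to ordinary unicast HTTP.
const segmentCache = new Map();

// Multicast reception path: a reconstructed segment arrives and is cached.
function onSegmentPushed(url, bytes) {
  segmentCache.set(url, bytes);
}

// Player request path: in the prototype, the Service Worker's fetch
// handler does the equivalent of this lookup.
async function handleSegmentRequest(url, unicastFetch) {
  if (segmentCache.has(url)) {
    return { source: 'multicast-cache', body: segmentCache.get(url) };
  }
  return { source: 'unicast', body: await unicastFetch(url) };
}
```

The player (DASH.js in the demo) stays unmodified: it issues ordinary segment requests and never needs to know whether a given segment arrived over multicast or unicast.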


Wrap Up

Lucas: (Slide 33) To wrap up, we think there are compelling use cases for push mode interactions on the web.
... Server push is well suited, it avoids the need for custom protocols.
... We want to further the work on the experimental client to see if we can do true multicast in the browser.
... The browser could do the validation for content authenticity and integrity.
... Any comments or questions?

<Zakim> kaz, you wanted to ask if it's ok to distribute these (updated) slides to the public list (later)

Kaz: Can we share the slides with the M&E IG?

Chris: Yes, actually I already did; I'll circulate them with the minutes.

Kaz: Regarding the Fetch-based approach for server push: when I talked with some of my colleagues about server push, we thought it would be better to use Fetch rather than WebSocket from a standardization viewpoint, so I'm interested in this approach
... There's also the Web of Things WG, working on a higher level scripting API for IoT devices
... An API for servers to expose their capabilities and another API for clients to consume exposed capabilities
... This mechanism could be used for video streaming as well
... so should be interesting to discuss with the WoT group
... that's why I'm proposing a joint session during TPAC

<kaz> scripting api draft

Giri: In ATSC we implemented WebSocket between the receiver and the Web App.
... We wanted to avoid reliance on a push API.
... It's not really a theoretical issue for us.
... We took care to avoid unnecessary reinitialization of the codecs, e.g., passing init segments to the MSE instance.
... How do you minimise reinitialization?

Lucas: I'm not a DASH expert, but we have a common init segment. The request flow would be: Get MPD, then get init segment for each representation.
... There's no period where we get more init segments. The init segment is quite small, doesn't benefit from delivering over multicast, so we'd leave it as unicast.
... We could follow up offline.

Giri: we have constraints, not all TVs are internet connected.
... We allowed the WebSocket interface to do sub-segments.
... Want to minimise dead air time, so we deliver bytes to the web application and the MSE instance.
... Is that considered in your implementation? Do you wait for the full segment to arrive before giving it to the browser?

Lucas: I think both can be supported. The library is built to receive the full segment, taking losses and the need to repair into account.
... The API I showed returns a Promise for a Response. You could do an early resolution and then stream the bytes as they're ready.
... I think DASH.js has experimented with this, pipelining delivery of bytes end to end.
... I agree with the problem statement. I would encourage discussion around improvements on this for the Web platform.
... In general latency is bad for live media. Having the flexibility to optimise is beneficial.
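The early-resolution idea Lucas describes can be sketched with standard streams: resolve the Response as soon as the first bytes arrive, then feed the rest through a ReadableStream instead of waiting for the whole segment. This is a sketch of the general pattern only; the names are invented, and as stated above the BBC library currently waits for complete segments.

```javascript
// Sketch of "early resolution": hand the player a Response immediately,
// backed by a ReadableStream, and stream segment bytes in as they arrive.
function pushWithStreaming() {
  let controller;
  const stream = new ReadableStream({
    start(c) { controller = c; },
  });
  return {
    response: new Response(stream),              // usable right away
    write: (chunk) => controller.enqueue(chunk), // chunk: Uint8Array
    close: () => controller.close(),
  };
}

// Usage: bytes arrive piecemeal, but the Response exists from the start.
const { response, write, close } = pushWithStreaming();
write(new TextEncoder().encode('seg'));
write(new TextEncoder().encode('ment'));
close();
```

With loss repair in the path, the trade-off is that early-delivered bytes must already be known-good, which is why full-segment reception is the conservative default.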

Will: Thanks for a good presentation. I have a question about the solution space.
... Push vs pull doesn't change the bits going across the wire.
... From a traffic profile perspective, if you put in a caching server, you have the same profile as with a pull-based approach.
... In my mind, the difference is about timing from the client point of view.
... With smart clients, we can get the timing down. I'm worried about the effort involved.

Lucas: It's a valid comment. Our IP glide-path means we want to be fully IP.
... The cost of server infrastructure is high. We see the future as being a combination of unicast and multicast.
... Some way of managing peaks using HTTP.
... DASH benefits from having an explicit timeline, and knowing when is the right time to do things, and minimising this.
... Ad-hoc or unpredictable content delivery: we consider this as a unidirectional flow.
... A callback function or mechanism for a push is needed, but this isn't in the Web platform.
... Jake Archibald has a good article on this. The content can be pushed to the client, but it sits in the push cache. It's only when the client makes an HTTP request that the content is delivered.

-> https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/


Will: Multicast is typically done on a managed network. So you'd need to accommodate very rapid fluctuations in throughput to do this on the public internet.
... The client still needs intelligence, as the server doesn't know.
... So there's still a complex feedback loop between client and server.

Lucas: This is true, the DVB work on ABR requires the separation of qualities all the way to the client.
... The client detects congestion differently than with TCP.
... General internet-wide OTT multicast isn't a thing yet, but it's feasible for us to work with partners to enable multicast over certain network segments so that content is also available on multicast.
... Users want a seamless experience, and our design aims to achieve this. We do a race, effectively: the master is always unicast HTTP, and the multicast is an alternative option, advertised through Alt-Svc, which is different to the traditional HTTP-based view.

Chris: We're out of time now. Thank you Lucas for joining us and presenting today.
... Our hope for this work is to continue the discussion among the browser vendors.

Next call

Chris: The next call is Tuesday 4th September, topic TBD.
... [adjourned]

Summary of Action Items

Summary of Resolutions

[End of minutes]




---------------------
Received on Friday, 10 August 2018 15:06:33 UTC

This archive was generated by hypermail 2.3.1 : Friday, 10 August 2018 15:06:34 UTC