- From: Glenn Maynard <glenn@zewt.org>
- Date: Tue, 18 Jan 2011 20:14:16 -0500
On Tue, Jan 18, 2011 at 7:32 PM, David Singer <singer at apple.com> wrote:

> I'm sorry, perhaps that was a shorthand.
>
> In RTSP-controlled RTP, there is a tight relationship between the play
> point, and play state, the protocol state (delivering data or paused)
> and the data delivered (it is delivered in precisely real-time, and
> played and discarded shortly after playing). The server delivers very
> little more data than is actually watched.
>
> In HTTP, however, the entire resource is offered to the client, and
> there is no protocol to convey play/paused back to the server, and the
> typical behavior when offered a resource in HTTP is to make a simple
> binary decision to either load it (all) or not load it (at all). So, by
> providing a media resource over HTTP, the server should kinda be
> expecting this 'download' behavior.

The only practical server-side problem I can think of is that capping the
prebuffer may result in keeping HTTP connections open longer; rather than
opening the connection just long enough to download the video, it's
likely to be kept open for the entire duration of the video. That's
something to think carefully about, and it influences implementations
(e.g. when the cap is reached, close the connection if the video is
paused), but it doesn't seem like a showstopper.

> Not only that, but if my client downloads as much as possible as soon
> as possible and caches as much as possible, and yours downloads as
> little as possible as late as possible, you may get brownie points from
> the server owner, but I get brownie points from my local user -- the
> person I want to please if I am a browser vendor. There is every
> incentive to be resilient and 'burn' bandwidth to achieve a better user
> experience.
>
> Servers are at liberty to apply a 'throttle' to the supply, of course
> ("download as fast as you like at first, but after a while I'll only
> supply at roughly the media rate"). They can suggest that the client be
> a little less aggressive in buffering, but it's easily ignored and the
> incentive is to ignore it.
>
> So I tend to return to "if you want more tightly-coupled behavior, use
> a more tightly-coupled protocol"...

Browser vendors always have incentives to benefit the user at the expense
of servers. Parallel HTTP connections are the most obvious example:
although you could make pages load faster by opening 20 parallel
connections to a server, as I recall most browsers use around 6, which is
out of spec but a reasonable value that improves the user experience
without hammering servers. Some browsers are less civil and open far
more, of course; but enough browser vendors seem reasonable about this
sort of thing, even when the incentive is to open the floodgates, for
this to be useful.

So if the suggestion is that a "maximumPrebuffer" setting wouldn't be
implemented because not doing so makes the browser look better to the
user, at least on first impression I'm not so sure. I think that's only
true if it's impossible to implement capped prebuffering reliably, but I
don't think anyone's made that argument.

--
Glenn Maynard
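To make the capped-prebuffer behavior discussed above concrete, here is a
minimal TypeScript sketch of one way a client could do it over plain HTTP:
read the response until a byte cap is reached, close the connection while
paused, and resume later with a Range request. The cap size, the function
name prebufferUpTo, and the use of fetch/AbortController are assumptions
made for illustration only; nothing in the thread specifies them, and a
real browser would implement this inside its media stack.

    // Hypothetical sketch of capped prebuffering over plain HTTP.
    // Assumes fetch/AbortController and a server that honors Range requests.
    const PREBUFFER_CAP_BYTES = 5 * 1024 * 1024; // assumed 5 MB cap, illustrative only

    async function prebufferUpTo(
      url: string,
      startByte: number,
      capBytes: number,
    ): Promise<{ chunks: Uint8Array[]; nextByte: number }> {
      const controller = new AbortController();
      const response = await fetch(url, {
        headers: startByte > 0 ? { Range: `bytes=${startByte}-` } : {},
        signal: controller.signal,
      });
      const reader = response.body!.getReader();
      const chunks: Uint8Array[] = [];
      let received = 0;
      // Read until the cap is hit or the resource ends.
      while (received < capBytes) {
        const { done, value } = await reader.read();
        if (done || !value) break;
        chunks.push(value);
        received += value.byteLength;
      }
      // Cap reached while paused: drop the connection instead of downloading
      // the whole resource, as suggested above.
      controller.abort();
      return { chunks, nextByte: startByte + received };
    }

When playback resumes or the buffer drains, the client would call
prebufferUpTo again with the returned nextByte to continue via a Range
request, so the "connections held open for the whole video" concern
largely reduces to reconnect overhead (assuming the server supports byte
ranges).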
Received on Tuesday, 18 January 2011 17:14:16 UTC