- From: Aymeric Vitte <vitteaymeric@gmail.com>
- Date: Fri, 18 Apr 2014 10:29:03 +0200
- To: Aaron Colwell <acolwell@google.com>
- CC: "public-html-media@w3.org" <public-html-media@w3.org>
- Message-ID: <5350E24F.8020708@gmail.com>
We can continue off list if you prefer, but again, if I take the time to
write here it is because I think this can be of some interest to other
people who will use the API.
I am a bit perplexed when you mention that even with streams the boolean
will be used; I must definitely be misunderstanding something about it.
I suppose that an "outstanding" operation can be something like appending
an enormous chunk, and I think the boolean logic works in that case, but
as far as I understand it the API is designed for streaming, so
outstanding operations are supposed to be unlikely.
What I call the "event loop" is the loop created by the updateend event
handler set on the source buffer: logically you are supposed to append a
new chunk when it fires, and if no new chunk is available the loop stops.
When a new chunk comes in later you have to handle it yourself (append it
directly); updateend then fires once that append completes and the loop
restarts if other chunks are there.
This transition is where my problem is. I suspect that with small chunks
coming in quickly, but sometimes not fast enough to keep the loop going,
the updating boolean has no time to switch to a state that avoids a
collision between two appendBuffer() calls. The code (which is basically
the one below) does not pass any invalid data and I have checked that the
sequencing is correct; what I see is that when it fails, the same chunk
was attempted to be appended twice.
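To make this concrete, the pattern I am describing is roughly the
following (a simplified sketch; onChunk and queue are only illustrative
names, source is the SourceBuffer as in the code quoted below):

    var queue = [];

    function onChunk(chunk) {   // called for each chunk received from the network
      queue.push(chunk);
      // handle1: restart the "loop" if no append is in progress
      if (!source.updating && queue.length === 1) {
        source.appendBuffer(queue.shift());
      }
    }

    source.addEventListener('updateend', function() {
      // handle2: keep the loop going while chunks are queued
      if (queue.length) {
        source.appendBuffer(queue.shift());
      }
    });

The failure shows up exactly at this transition, when the loop has
stopped and a new chunk arrives.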
If I could provide a reduced live test case I would, but it takes time...
I can pass the real use case, and the corresponding code, off list so you
can test it. I think you can reproduce it without difficulty with the
youtube MSE demo player by reducing the size of the chunks retrieved by
xhr and adding a delay every x chunks, to make sure the "event loop"
stops before the next chunks come in. It can look quite strange that I am
the only one to have this problem, but I believe all existing
implementations simply never let the loop break. I have tried a lot of
things, including buffering the small chunks and concatenating them,
without success so far: at some point appendBuffer always fails when it
happens that the loop has stopped.
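Something along these lines should be enough to simulate the delay (a
sketch; the threshold and timeout values are arbitrary):

    var count = 0;

    function deliver(chunk) {   // wrap whatever normally feeds onChunk()
      count++;
      if (count % 10 === 0) {
        // every x chunks, hold the data long enough for the pending append
        // to finish, so updateend fires with an empty queue and the "loop" stops
        setTimeout(function() { onChunk(chunk); }, 500);
      } else {
        onChunk(chunk);
      }
    }
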
Regards
Aymeric
On 18/04/2014 02:30, Aaron Colwell wrote:
>
> On Thu, Apr 17, 2014 at 3:59 PM, Aymeric Vitte
> <vitteaymeric@gmail.com> wrote:
>
>
> On 17/04/2014 23:31, Aaron Colwell wrote:
>> On Thu, Apr 17, 2014 at 2:10 PM, Aymeric Vitte
>> <vitteaymeric@gmail.com> wrote:
>>
>> What I mean here is that this API just does not work and cannot,
>> unless I am proven incorrect. Please answer "I still
>> don't get the rationale for 'updating' and why appendBuffer
>> does not queue the chunks by itself"; it is the first time I see a
>> boolean used instead of events, it looks very approximate, and what
>> is the use of this boolean in a stream or promises context?
>>
>>
>> This would force the UA to buffer an arbitrarily large,
>> unlimited amount of data.
>
> Could you explain this, please? The updating boolean cannot stop
> the event loop, so data keeps coming, gets buffered and appended, and
> apparently gets discarded at some point.
>
>
> I don't understand what event loop you are talking about. The updating
> boolean prevents any further data from being passed to appendBuffer().
> It is merely a signal indicating whether it is safe to call
> appendBuffer() or remove(). The updating boolean is always set to false
> before the updateend event is queued for dispatch. There is no
> guarantee that updateend will fire immediately after the boolean is set
> to false. Queued events on other EventTargets like XHR or a WebSocket
> may get dispatched before the updateend event handler is fired.
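>
> For example, the following interleaving is perfectly legal (a sketch;
> sourceBuffer and nextChunk() are just stand-ins for however the
> application names its SourceBuffer and obtains its data):
>
>     xhr.onprogress = function () {
>       if (!sourceBuffer.updating) {
>         // can run after updating went back to false but *before* the
>         // queued updateend event has been dispatched
>         sourceBuffer.appendBuffer(nextChunk());
>       }
>     };
>
>     sourceBuffer.addEventListener('updateend', function () {
>       // if the onprogress append above already started, updating is
>       // true again and this call throws an InvalidStateError
>       sourceBuffer.appendBuffer(nextChunk());
>     });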
>
>
>
>> The updating boolean is a form of backpressure.
>
> Same question, how can this boolean be used for backpressure?
>
>
> It prevents SourceBuffer updates while there is an outstanding
> asynchronous operation (i.e. appendBuffer() or remove()).
>
>
>
>> SourceBuffer.appendBuffer() should always throw an exception if
>> updating is set to true.
>
> Why can it not queue the chunk instead of throwing?
>
>
> Because that would require the UA to buffer an arbitrary number of
> queued chunks. The intent is for the web application to manage buffer
> fetching and only fetch what it needs or is willing to buffer.
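>
> A sketch of that pattern (fetchNextChunk() is just a placeholder for
> whatever range request the application issues):
>
>     sourceBuffer.addEventListener('updateend', function () {
>       // only request more data once the previous append has completed,
>       // so the application never holds more than one chunk at a time
>       fetchNextChunk(function (chunk) {
>         sourceBuffer.appendBuffer(chunk);
>       });
>     });
>
>     fetchNextChunk(function (chunk) {   // kick off the first append
>       sourceBuffer.appendBuffer(chunk);
>     });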
>
>
>
>> Stream and Promises were not even available when the MSE
>> standardization process was started. The Stream API spec got
>> rebooted recently
>
> Yes, and I am part of the "reboot"; backpressure is still somewhat of
> an open issue, and I have given my thoughts on the subject [1]. It
> cannot be solved with a boolean, of course.
>
>
>
>> so I don't think one can make a claim that converting to streams
>> would "just work". Unprefixed MSE implementations have been
>> shipping for quite some time and I don't think it makes sense at
>> this point to convert everything to Promises now. That would just
>> create unnecessary churn and pain for large existing deployments
>> like YouTube, Netflix, and others.
>
> Maybe; streams would certainly help. But maybe I am missing a
> fundamental use of updating in this API, cf. the questions above.
>
>
> Many of these vendors would like to use Streams and we have provisions
> for it in the existing MediaSource API, but we are waiting for that
> spec to stabilize before focusing on it too much. Even with streams,
> the updating boolean will still exist to prevent calls to
> appendBuffer() and remove() while there is an appendStream() outstanding.
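>
> Roughly, per the current draft (appendStream() is asynchronous just
> like appendBuffer()):
>
>     sourceBuffer.appendStream(stream);   // updating becomes true for the
>                                          // duration of the stream append
>     sourceBuffer.appendBuffer(chunk);    // throws an InvalidStateError
>                                          // while appendStream() is
>                                          // outstanding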
>
>> And no, I am not going to file a bug; just take the youtube
>> player, delay the chunks so the event loop breaks and you
>> will see the issue. You can continue ignoring, evading, and
>> disregarding it, but that will not solve it.
>>
>>
>> I don't particularly care for this tone.
>
> Me neither for yours.
>
>
> I apologize. I will try to use a calmer tone.
>
>
>
>> It doesn't make me want to help
>
> I don't need help, just this API to work correctly.
>
>
> I believe the API is working correctly. The reason I am asking for
> concrete example code is that I'm trying to determine whether there is
> an actual bug or you are making invalid assumptions about API
> behavior. It is hard to differentiate that with just words. Running
> code can help me see exactly the situation you are running into.
>
>
>
>> you especially if you are unwilling to provide a simple repro
>> case that helps isolate the problem you claim is occurring.
>> Saying "just take the youtube player" is not sufficient given
>> that it is a massive piece of JavaScript
>
> It's a simple piece of js.
>
>
>> and it isn't clear to me how to "delay the chunks" in the same
>> way you are doing it.
>
> I am not doing it, the network is; delaying chunks means you don't
> receive enough data and the event loop breaks.
>
>
> Ok. I still don't quite understand what event loop you are talking
> about. Is it possible your code is passing invalid data to
> appendBuffer()? For example, if your code blindly calls
> sourceBuffer.appendBuffer(append_buffer.shift()) when append_buffer is
> empty then I believe an exception will get thrown because you are
> trying to pass undefined into a method that expects to get an object.
> Is it possible this is happening?
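>
> For example (a sketch of the failure mode I am describing):
>
>     // if append_buffer is empty, shift() returns undefined, and
>     // appendBuffer() throws instead of appending anything
>     sourceBuffer.appendBuffer(append_buffer.shift());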
>
>
>
>> If the problem is as fundamental as you claim, it should be
>> trivial for you to create some simple JavaScript to reproduce the
>> problem. Please try to be part of the solution and not simply
>> complain.
>
> I am not complaining; I am taking the time to report a potential
> problem and trying to solve it. In the light of your answers I still
> don't know whether we are facing a spec issue or a Chrome issue, and I
> don't get the use of the updating boolean.
>
>
> ok. I am trying to help determine what is wrong, but I need you to
> work with me here. Hopefully my explanations above will help you
> understand.
>
>
> In order to avoid any misunderstanding, I would not be replying
> here this late (my time) if I did not think there could be an issue,
>
>
> ok. I hope you can see that I am actually trying to resolve this
> issue. The only reason I was trying to direct you to the Chrome bug
> tracker was that it sounded like you believed this was a problem
> specific to Chrome, and I wanted to save the list the burden of a
> support email exchange. It appears I failed to accurately convey that.
> I'm happy to follow up with you off list.
>
> Aaron
>
>
> Regards
>
> Aymeric
>
> [1] https://github.com/whatwg/streams/issues/13
>
>
>>
>> Aaron
>>
>>
>>
>> On 17/04/2014 20:16, Aaron Colwell wrote:
>>> This is not a Chrome support channel. Please file a bug at
>>> http://crbug.com with a complete minimal repro attached and
>>> I can take a look.
>>>
>>> Aaron
>>>
>>>
>>> On Thu, Apr 17, 2014 at 10:46 AM, Aymeric Vitte
>>> <vitteaymeric@gmail.com> wrote:
>>>
>>> Insisting on this one: I spent quite a lot of time on
>>> this and it is still not working perfectly. Maybe other
>>> implementations don't have the problem because they are
>>> not using such a small chunk size and/or their chunks are
>>> never delayed, so the event chaining never stops.
>>>
>>> // on each chunk received do:
>>> append_buffer.push(chunk);
>>>
>>> // handle1
>>> if (!source.updating && append_buffer.length === 1) {
>>>     source.appendBuffer(append_buffer.shift());
>>> }
>>>
>>> if (first_chunk) {
>>>     source.addEventListener('updateend', function() {
>>>         // handle2
>>>         if (append_buffer.length) {
>>>             source.appendBuffer(append_buffer.shift());
>>>         }
>>>     });
>>> }
>>>
>>> This should work but it does not with Chrome:
>>> append_buffer reaches a size of 0, the last chunk is
>>> being appended, a new chunk comes in, updateend fires
>>> --> handle1 and handle2 can execute at the same time and
>>> wrongly append the same chunk.
>>>
>>> This is not supposed to be possible, but it is what is
>>> happening, maybe related to concurrent access.
>>>
>>> A workaround is to keep the event chaining alive by
>>> appending chunks of size 0 using a timeout. It works
>>> most of the time, but sometimes appending a chunk of
>>> size 0 fails too, for unknown reasons; on Chrome,
>>> chrome://media-internals only says 'decode error'.
>>>
>>> Spec issue or Chrome issue, I don't know; I still don't
>>> get the rationale for 'updating' and why appendBuffer
>>> does not queue the chunks by itself.
>>>
>>> Regards
>>>
>>> Aymeric
>>>
>>> On 02/04/2014 22:46, Aymeric Vitte wrote:
>>>
>>> The usual code is something like:
>>>
>>> if (!source.updating) {
>>>     source.appendBuffer(append_buffer.shift());
>>> }
>>>
>>> if (first_chunk) {
>>>     source.addEventListener('updateend', function() {
>>>         if (append_buffer.length) {
>>>             source.appendBuffer(append_buffer.shift());
>>>         }
>>>     });
>>> }
>>>
>>> The use case is: chunks of 498 B and a bandwidth of
>>> 1 Mbps, and this does not work at all, at least
>>> with Chrome; it might be a Chrome issue and/or a
>>> spec issue.
>>>
>>> Because between two 'updateend' events, the
>>> 'updating' property can become false, therefore you
>>> can append a chunk at the wrong place. If you
>>> remove the first part of the code (or replace it by
>>> if (first_chunk) {source.append...}), then the buffer
>>> chaining can stop if for some reason the chunks are
>>> delayed.
>>>
>>> With streams the problem will disappear; without
>>> streams there is a workaround, but as I mentioned
>>> in a previous post I don't find this behavior normal.
>>>
>>> Regards
>>>
>>> Aymeric
>>>
>>>
--
Peersm : http://www.peersm.com
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Received on Friday, 18 April 2014 08:29:41 UTC