
Re: Issue with updating/updateend

From: Aaron Colwell <acolwell@google.com>
Date: Thu, 17 Apr 2014 14:31:39 -0700
Message-ID: <CAA0c1bAithjQ8LxAok5brK4-F0eCSKfAONapuO+j+PpN5=or0w@mail.gmail.com>
To: Aymeric Vitte <vitteaymeric@gmail.com>
Cc: "public-html-media@w3.org" <public-html-media@w3.org>
On Thu, Apr 17, 2014 at 2:10 PM, Aymeric Vitte <vitteaymeric@gmail.com> wrote:

>  What I mean here is that this API just does not work and cannot, unless
> I am proven incorrect. Please answer: "I still don't get the rationale for
> 'updating' and why appendBuffer does not queue the chunks by itself". This is
> the first time I have seen a boolean used instead of events; it looks very
> approximate. What is the use of this boolean in a Stream or Promise context?
>

This would force the UA to buffer an arbitrarily large amount of data. The
updating boolean is a form of backpressure. SourceBuffer.appendBuffer()
always throws an InvalidStateError exception if updating is set to true.
Streams and Promises were not even available when the MSE standardization
process started. The Streams API spec got rebooted recently, so I don't
think one can claim that converting to streams would "just work".
Unprefixed MSE implementations have been shipping for quite some time, and
I don't think it makes sense at this point to convert everything to
Promises. That would just create unnecessary churn and pain for large
existing deployments like YouTube, Netflix, and others.
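The backpressure pattern described above can be sketched as follows. This is a minimal illustration, not code from the thread: FakeSourceBuffer, enqueueChunk, and maybeAppend are made-up names, and the fake buffer only models the updating/appendBuffer/'updateend' surface of a real MSE SourceBuffer so the logic can run outside a browser:

```javascript
// Stand-in for an MSE SourceBuffer; only models the parts the
// queueing logic touches (updating, appendBuffer, 'updateend').
class FakeSourceBuffer {
  constructor() {
    this.updating = false;
    this.appended = [];
    this.listeners = [];
  }
  addEventListener(type, fn) {
    if (type === 'updateend') this.listeners.push(fn);
  }
  appendBuffer(chunk) {
    // Mirrors the spec behavior Aaron describes: appending while an
    // append is in flight is an error.
    if (this.updating) throw new Error('InvalidStateError: append in progress');
    this.updating = true;
    this.appended.push(chunk);
    // Real appends complete asynchronously; simulate with a microtask.
    Promise.resolve().then(() => {
      this.updating = false;
      this.listeners.forEach((fn) => fn());
    });
  }
}

const sourceBuffer = new FakeSourceBuffer();
const pendingChunks = [];

function enqueueChunk(chunk) {
  pendingChunks.push(chunk);
  maybeAppend();
}

function maybeAppend() {
  // Only start an append when no other append is in flight; new chunks
  // that arrive while updating is true simply wait in the queue.
  if (!sourceBuffer.updating && pendingChunks.length > 0) {
    sourceBuffer.appendBuffer(pendingChunks.shift());
  }
}

// Draining the queue only from 'updateend' (or from an idle enqueue)
// keeps exactly one append in flight at a time.
sourceBuffer.addEventListener('updateend', maybeAppend);

enqueueChunk('chunk-0');
enqueueChunk('chunk-1'); // queued; no throw even though updating is true
```

Funneling every append through a single function like maybeAppend is one way to avoid two code paths racing to call appendBuffer, which is the failure mode discussed further down in this thread.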


> And no, I am not going to file a bug. Just take the YouTube player, delay
> the chunks so the event loop breaks, and you will see the issue. You can
> continue ignoring, evading, or disregarding it; that will not solve it.
>

I don't particularly care for this tone. It doesn't make me want to help
you, especially if you are unwilling to provide a simple repro case that
helps isolate the problem you claim is occurring. Saying "just take the
YouTube player" is not sufficient, given that it is a massive piece of
JavaScript and it isn't clear to me how to "delay the chunks" in the same
way you are doing it. If the problem is as fundamental as you claim, it
should be trivial for you to create some simple JavaScript that reproduces
it. Please try to be part of the solution rather than simply complaining.

Aaron


>
>
> On 17/04/2014 20:16, Aaron Colwell wrote:
>
> This is not a Chrome support channel. Please file a bug at
> http://crbug.com with a complete minimal repro attached and I can take a
> look.
>
>  Aaron
>
>
> On Thu, Apr 17, 2014 at 10:46 AM, Aymeric Vitte <vitteaymeric@gmail.com> wrote:
>
>> Insisting on this one: I spent quite a lot of time on this and it is
>> still not working perfectly. Maybe other implementations don't hit the
>> problem because they are not using such a small chunk size and/or their
>> chunks are never delayed, so the event chaining never stops.
>>
>> // on each chunk received do:
>> append_buffer.push(chunk);
>>
>> // handle1
>> if (!source.updating && append_buffer.length === 1) {
>>     source.appendBuffer(append_buffer.shift());
>> }
>> if (first_chunk) {
>>     source.addEventListener('updateend', function() {
>>         // handle2
>>         if (append_buffer.length) {
>>             source.appendBuffer(append_buffer.shift());
>>         }
>>     });
>> }
>>
>> This should work, but it does not with Chrome: append_buffer reaches a
>> size of 0, the last chunk is being appended, a new chunk arrives,
>> updateend fires --> handle1 and handle2 can execute at the same time and
>> wrongly append the same chunk.
>>
>> This is not supposed to be possible, but it is what is happening; maybe
>> it is related to concurrent access.
>>
>> A workaround is to keep the event chaining alive by appending chunks of
>> size 0 using a timeout. It works most of the time, but sometimes
>> appending a zero-size chunk fails too, for unknown reasons; on Chrome,
>> chrome://media-internals only says 'decode error'.
>>
>> Spec issue or Chrome issue, I don't know; I still don't get the
>> rationale for 'updating' and why appendBuffer does not queue the chunks
>> by itself.
>>
>> Regards
>>
>> Aymeric
>>
>> On 02/04/2014 22:46, Aymeric Vitte wrote:
>>
>>> The usual code is something like:
>>>
>>> if (!source.updating) {
>>>     source.appendBuffer(append_buffer.shift());
>>> }
>>> if (first_chunk) {
>>>     source.addEventListener('updateend', function() {
>>>         if (append_buffer.length) {
>>>             source.appendBuffer(append_buffer.shift());
>>>         }
>>>     });
>>> }
>>>
>>> The use case is chunks of 498 B at a bandwidth of 1 Mbps, and this
>>> does not work at all, at least with Chrome; it might be a Chrome issue
>>> and/or a spec issue.
>>>
>>> Because between two 'updateend' events, the 'updating' property can
>>> become false, therefore you can append a chunk at the wrong place. If
>>> you remove the first part of the code (or replace it with if (first_chunk)
>>> {source.append...}), then the buffer chaining can stop if for some reason
>>> the chunks are delayed.
>>>
>>> With streams the problem will disappear; without streams there is a
>>> workaround, but as I mentioned in a previous post, I don't find this
>>> behavior normal.
>>>
>>> Regards
>>>
>>> Aymeric
>>>
>>>
>> --
>> Peersm : http://www.peersm.com
>> node-Tor : https://www.github.com/Ayms/node-Tor
>> GitHub : https://www.github.com/Ayms
>>
>>
>>
>
> --
> Peersm : http://www.peersm.com
> node-Tor : https://www.github.com/Ayms/node-Tor
> GitHub : https://www.github.com/Ayms
>
>
Received on Thursday, 17 April 2014 21:32:07 UTC
