
Re: Issue with updating/updateend

From: Aaron Colwell <acolwell@google.com>
Date: Thu, 17 Apr 2014 17:30:11 -0700
Message-ID: <CAA0c1bCq4PvsW+7HV6MQRYw+_nCawSnt6MofVwf0WeOuEkby1Q@mail.gmail.com>
To: Aymeric Vitte <vitteaymeric@gmail.com>
Cc: "public-html-media@w3.org" <public-html-media@w3.org>
On Thu, Apr 17, 2014 at 3:59 PM, Aymeric Vitte <vitteaymeric@gmail.com> wrote:

>
> On 17/04/2014 at 23:31, Aaron Colwell wrote:
>
>  On Thu, Apr 17, 2014 at 2:10 PM, Aymeric Vitte <vitteaymeric@gmail.com> wrote:
>
>>  What I mean here is that this API just does not work and cannot, unless
>> I am proven incorrect. Please answer "I still don't get the rationale for
>> 'updating' and why appendBuffer does not queue the chunks by itself". It's
>> the first time I see a boolean used alongside events; it looks very
>> approximate. What's the use of this boolean in a streams or promises
>> context?
>>
>
>  This would force the UA to buffer an arbitrarily large amount of
> data.
>
>
> Could you explain this please? The updating boolean cannot stop the event
> loop, so data keep coming, getting buffered, appended, and apparently
> discarded at a certain point in time.
>

I don't understand what event loop you are talking about. The updating
boolean prevents any further data from being passed to appendBuffer(). It
is merely a signal indicating whether it is safe to call appendBuffer() or
remove(). The updating boolean is always set to false before the updateend
event is queued for dispatch. There is no guarantee that updateend will
fire immediately after the boolean is set to false. Queued events on other
EventTargets like an XHR or a WebSocket may get dispatched before the
updateend event handler is fired.
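
To make this concrete, here is a minimal sketch of the pattern I have in
mind (the names are mine; assume an existing sourceBuffer and a WebSocket
"socket" with binaryType set to 'arraybuffer'). Both entry points funnel
through a single function that re-checks the flag before appending:

var queue = [];

function maybeAppend() {
    // Re-check the flag at every entry point: updating may have flipped
    // to false, and another queued task may have run, before updateend
    // fires.
    if (!sourceBuffer.updating && queue.length > 0) {
        sourceBuffer.appendBuffer(queue.shift());
    }
}

// Network side: queue the chunk and try to append right away.
socket.onmessage = function(event) {
    queue.push(event.data);
    maybeAppend();
};

// SourceBuffer side: drain one more chunk per completed append.
sourceBuffer.addEventListener('updateend', maybeAppend);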


>
>
>    The updating boolean is a form of backpressure.
>
>
> Same question: how can this boolean be used for backpressure?
>

It prevents SourceBuffer updates while there is an outstanding asynchronous
operation (i.e. appendBuffer() or remove()).
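
In other words (a sketch, not normative text; chunkA and chunkB are
placeholders), an attempt to start a second update while one is
outstanding is rejected immediately rather than queued:

sourceBuffer.appendBuffer(chunkA);   // starts an async update; updating == true
try {
    sourceBuffer.appendBuffer(chunkB);   // second update while one is outstanding
} catch (e) {
    // InvalidStateError: the UA refuses to queue chunkB on the app's behalf
}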


>
>
>    SourceBuffer.appendBuffer() should always throw an exception if
> updating is set to true.
>
>
> Why can it not queue the chunk instead of throwing?
>

Because that would require the UA to buffer an arbitrary number of queued
chunks. The intent is for the web application to manage buffer fetching and
only fetch what it needs or is willing to buffer.
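
For example (a sketch; fetchRange() is a hypothetical helper standing in
for whatever transport the app uses), the application can propagate
backpressure all the way to the network by not requesting the next chunk
until the previous append has completed:

var nextOffset = 0;
var CHUNK_SIZE = 512 * 1024;

function fetchAndAppend() {
    // fetchRange() XHRs bytes [nextOffset, nextOffset + CHUNK_SIZE) and
    // invokes the callback with an ArrayBuffer.
    fetchRange(nextOffset, CHUNK_SIZE, function(data) {
        nextOffset += data.byteLength;
        // No append is outstanding here, so updating is false and this
        // call will not throw.
        sourceBuffer.appendBuffer(data);
    });
}

// At most one chunk is in flight at a time, so memory use stays bounded.
sourceBuffer.addEventListener('updateend', fetchAndAppend);
fetchAndAppend();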


>
>
>    Streams and Promises were not even available when the MSE
> standardization process started. The Stream API spec got rebooted
> recently
>
>
> Yes, and I am part of the "reboot". Backpressure is still something of an
> open issue, and I have given my thoughts on this subject [1]; it cannot
> be solved with a boolean, of course.
>

>
>    so I don't think one can make a claim that converting to streams would
> "just work". Unprefixed MSE implementations have been shipping for quite
> some time and I don't think it makes sense at this point to convert
> everything to Promises now. That would just create unnecessary churn and
> pain for large existing deployments like YouTube, Netflix, and others.
>
>
> Maybe; streams would certainly help. Or maybe I am missing a fundamental
> use of updating in this API, cf. the questions above.
>

Many of these vendors would like to use Streams, and we have provisions for
it in the existing MediaSource API, but we are waiting for that spec to
stabilize before focusing on it too much. Even with streams, the updating
boolean will still exist to prevent calls to appendBuffer() and remove()
while there is an appendStream() outstanding.
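
For reference, the rough shape of that integration would be something like
the following. This is only a sketch: the Stream-returning XHR plumbing
below follows the old Stream API draft and is very much subject to change.

var started = false;
var xhr = new XMLHttpRequest();
xhr.open('GET', mediaUrl);       // mediaUrl is a placeholder
xhr.responseType = 'stream';     // per the pre-reboot Stream API draft
xhr.onreadystatechange = function() {
    if (!started && xhr.readyState === xhr.LOADING) {
        started = true;
        // updating stays true until the whole stream has been consumed,
        // so appendBuffer() and remove() remain excluded in the meantime.
        sourceBuffer.appendStream(xhr.response);
    }
};
xhr.send();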


>   And no, I am not going to file a bug. Just take the YouTube player,
>> delay the chunks so the event loop breaks, and you will see the issue. You
>> can continue ignoring, eluding, or dismissing it; that will not solve it.
>>
>
>  I don't particularly care for this tone.
>
>
> Me neither for yours.
>

I apologize. I will try to use a calmer tone.


>
>
>    It doesn't make me want to help
>
>
> I don't need help, I just need this API to work correctly.
>

I believe the API is working correctly. The reason I am asking for concrete
example code is that I'm trying to determine whether there is an actual bug
or you are making invalid assumptions about API behavior. It is hard to
tell the difference with words alone. Running code can show me exactly the
situation you are running into.


>
>
>    you, especially if you are unwilling to provide a simple repro case
> that helps isolate the problem you claim is occurring. Saying "just take
> the youtube player" is not sufficient given that it is a massive piece of
> JavaScript
>
>
> It's a simple piece of js.
>
>
>    and it isn't clear to me how to "delay the chunks" in the same way you
> are doing it
>
>
> I am not doing it, the network is; delaying chunks means you don't receive
> enough data and the event loop breaks.
>

OK. I still don't quite understand what event loop you are talking about.
Is it possible your code is passing invalid data to appendBuffer()? For
example, if your code blindly calls
sourceBuffer.appendBuffer(append_buffer.shift()) when append_buffer is
empty, then I believe an exception will get thrown because you are trying
to pass undefined into a method that expects an object. Is it possible
this is happening?
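
Concretely, the failure mode I have in mind is (a sketch):

var append_buffer = [];              // queue drained faster than it is filled
var chunk = append_buffer.shift();   // shift() on an empty array returns undefined
sourceBuffer.appendBuffer(chunk);    // throws: appendBuffer() expects an
                                     // ArrayBuffer or ArrayBufferView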


>
>
>   . If the problem is as fundamental as you claim, it should be trivial
> for you to create some simple JavaScript to reproduce the problem. Please
> try to be part of the solution and not simply complain.
>
>
> I am not complaining, but taking time to report a potential problem and
> trying to solve it. Now, in light of your answers, I still don't know
> whether we are facing a spec issue or a Chrome issue; I don't get the use
> of the updating boolean.
>

OK. I am trying to help determine what is wrong, but I need you to work
with me here. Hopefully my explanations above will help you understand.


>
> In order to avoid any misunderstanding: I would not be replying here this
> late in my time zone if I did not think there could be an issue,
>

OK. I hope you can see that I am actually trying to resolve this issue. The
only reason I was trying to direct you to the Chrome bug tracker was that
it sounded like you believed this was a problem specific to Chrome, and I
wanted to save the list the burden of a support email exchange. It appears
I failed to accurately convey that. I'm happy to follow up with you off
list.

Aaron



>
> Regards
>
> Aymeric
>
> [1] https://github.com/whatwg/streams/issues/13
>
>
>
>  Aaron
>
>
>>
>>
>> On 17/04/2014 at 20:16, Aaron Colwell wrote:
>>
>> This is not a Chrome support channel. Please file a bug at
>> http://crbug.com with a complete minimal repro attached and I can take a
>> look.
>>
>>  Aaron
>>
>>
>> On Thu, Apr 17, 2014 at 10:46 AM, Aymeric Vitte <vitteaymeric@gmail.com> wrote:
>>
>>> Insisting on this one: I spent quite a lot of time on this and it's
>>> still not working perfectly. Maybe other implementations don't have the
>>> problem because they don't use such a small chunk size and/or their
>>> chunks are never delayed, so the event chaining never stops.
>>>
>>> // on each chunk received do:
>>> append_buffer.push(chunk);
>>>
>>> // handle1
>>> if (!source.updating && append_buffer.length === 1) {
>>>     source.appendBuffer(append_buffer.shift());
>>> }
>>> if (first_chunk) {
>>>     source.addEventListener('updateend', function() {
>>>         // handle2
>>>         if (append_buffer.length) {
>>>             source.appendBuffer(append_buffer.shift());
>>>         }
>>>     });
>>> }
>>>
>>> This should work but it does not with Chrome: append_buffer reaches a
>>> size of 0 while the last chunk is being appended, a new chunk comes in,
>>> updateend fires --> handle1 and handle2 can execute at the same time and
>>> wrongly append the same chunk.
>>>
>>> It's not supposed to be possible, but this is what is happening; maybe
>>> it is related to concurrent access.
>>>
>>> A workaround is to maintain the event chaining by appending chunks of
>>> size 0 using a timeout. It works most of the time, but sometimes
>>> appending a chunk of size 0 fails too, for unknown reasons; on Chrome,
>>> chrome://media-internals only says 'decode error'.
>>>
>>> Spec issue or Chrome issue, I don't know; I still don't get the
>>> rationale for 'updating' and why appendBuffer does not queue the chunks
>>> by itself.
>>>
>>> Regards
>>>
>>> Aymeric
>>>
>>> On 02/04/2014 at 22:46, Aymeric Vitte wrote:
>>>
>>>  The usual code is something like:
>>>>
>>>> if (!source.updating) {
>>>>     source.appendBuffer(append_buffer.shift());
>>>> }
>>>> if (first_chunk) {
>>>>     source.addEventListener('updateend', function() {
>>>>         if (append_buffer.length) {
>>>>             source.appendBuffer(append_buffer.shift());
>>>>         }
>>>>     });
>>>> }
>>>>
>>>> The use case is chunks of 498 B at a bandwidth of 1 Mbps, and this
>>>> does not work at all, at least with Chrome; it might be a Chrome issue
>>>> and/or a spec issue.
>>>>
>>>> Because between two 'updateend' events the 'updating' property can
>>>> become false, you can append a chunk at the wrong place. If you remove
>>>> the first part of the code (or replace it with if (first_chunk)
>>>> {source.append...}), then the buffer chaining can stop if for some
>>>> reason the chunks are delayed.
>>>>
>>>> With streams the problem will disappear; without streams there is a
>>>> workaround, but as I mentioned in a previous post, I don't find this
>>>> behavior normal.
>>>>
>>>> Regards
>>>>
>>>> Aymeric
>>>>
>>>>
>>> --
>>> Peersm : http://www.peersm.com
>>> node-Tor : https://www.github.com/Ayms/node-Tor
>>> GitHub : https://www.github.com/Ayms
>>>
>>>
>>>
>>
>> --
>> Peersm : http://www.peersm.com
>> node-Tor : https://www.github.com/Ayms/node-Tor
>> GitHub : https://www.github.com/Ayms
>>
>>
>
> --
> Peersm : http://www.peersm.com
> node-Tor : https://www.github.com/Ayms/node-Tor
> GitHub : https://www.github.com/Ayms
>
>