
Re: WPT test dependencies

From: Alexandre GOUAILLARD <agouaillard@gmail.com>
Date: Mon, 5 Feb 2018 21:52:45 -0800
Message-ID: <CAHgZEq7Tj6aeHGGtEj+10_mF71rPiB07-GYSh+C1RyRunoabbg@mail.gmail.com>
To: Philipp Hancke <fippo@goodadvice.pages.de>
Cc: "<public-webrtc@w3.org>" <public-webrtc@w3.org>

Thank you for "bothering" to go through all the tests this time around.

The current process had this gem slip through as well:
>     https://github.com/w3c/web-platform-tests/pull/5847
> In this particular case the spec was vague:
> https://github.com/w3c/web-platform-tests/pull/5847/files#
> diff-908cb5d58eaaf36277efb47afe2fe6c6R120
> and the "behave as if" resulted in the longest spec issue ever.
> At the same time the test assertion
> https://github.com/w3c/web-platform-tests/pull/5847/files#
> diff-908cb5d58eaaf36277efb47afe2fe6c6R136
> was not passing in Chrome or Firefox or Edge. This should have raised a
> flag. It did not.
> That test descriptions like "new RTCIceCandidate()" pass review is absurd.

Thank you for sharing your constructive opinion on the matter.

> A working review process would require to show which browsers a test
> passed in and discuss failures. If a test did not pass the ball goes to a
> person working on that browser for triage. This might result in a bug being
> filed against that browser or the spec.

I agree with you that it would be a reasonable process. However, I do think
you are assuming too much, and in any case, you are grossly misinformed.

1. Review process.

All the tests submitted in the media stream and webrtc folders had at least
one reviewer from each browser attached to them, some automatically through
the author mechanism, some manually. Bernard was asked to review, as were
youenn, jib, ... the usual suspects. It's difficult for an external person
to represent and speak on behalf of the browsers. If they decide NOT to
review, that's their choice; we cannot force them. You also cannot ask the
person who wrote the test, their subordinate, or someone in their chain of
command to do the review. That would be a conflict of interest.

2. Run the tests, show results, analyse failures.

2.a The GitHub hooks run the tests on some browser configurations. I do not
know who is responsible for maintaining them, but I can confirm that we do
not have the capacity to add browser configurations to them. I can assure
you that at least Harald and foolip are on it.

2.b A wpt.fyi dashboard is being developed, which provides another level of
visibility. It is not fully automated yet and does not run daily yet, but
it is there, it is used, and it is checked regularly.

2.c The KITE project has daily runs against many configurations, including
the one you mentioned and many more. We also have an automated version of
wpt that we run daily, but it is not yet part of the publicly visible runs,
pending Google approval. The results of those 300+ daily runs will be part
of webrtc.org, possibly within days, and are reproducible from the open
source code. As far as we know, nobody else is testing so many
configurations (including Safari on iOS, and Chrome and Firefox on Android)
so thoroughly, every day, or at least not publicly.

Every day, an analysis report of the daily failures (since we also run
against Canary and other daily/nightly versions) is produced and provided
to Google for a first review. If we cannot isolate the source of the
problem ourselves, we contact the corresponding browser team and work out
with them the best way to address it. Sometimes that means opening a
ticket, sometimes it does not. Nils has mainly been our point of contact
for Mozilla bugs, and we have already discovered a few nasty bugs, which we
tweeted about. The same goes for Bernard, and Youenn/Eric.

I think the process in place is pretty much what you recommend we do. I do
not know why you assumed we did not. I am available anytime you have a
question regarding these technologies, so you can stay current and make
informed statements.

> Such a process is of course much more time-consuming and expensive --
> unless you are trying to make this a "community effort".

It is a community effort by definition and from the start. I quote:
Community-Driven & Industry-Supported
A vibrant, industry-backed community is key to a successful and sustainable
testing effort.

I personally have never been paid, directly or indirectly, for the tests I
wrote, or for webrtc-in-webkit, the early support of Safari in adapter.js,
or porting the patches from Safari's fork of libwebrtc to Google's, in all
the many years I have been a member of the W3C and the IETF.

That does not mean that people who are not paid for it contribute bad tests
(or vice versa). I'm also against any kind of segregation.

> > Don't be shy, "Be a programmer" (TM).
> looks like my last commit in the webrtc directory is more recent than
> yours.

Fippo, you are a great engineer. Your contributions to various open source
projects are widely recognised, and some of your technical blog posts are
mandatory reading for anybody learning WebRTC. We can never thank you
enough for all that. I will wear a "Fippo is the best WebRTC engineer in
the universe" t-shirt at the next three international conferences I attend,
as a sign of penitence.

As a professional, I believe it is an important skill to be able to
acknowledge the good side of everyone, independently of my opinion of them
as persons and whether I would spend a vacation with them. It is also
important to nuance and balance any evaluation of the quality of the work
people produce. Not acknowledging and respecting effort, belittling other
people's work, or discriminating against anybody for any reason, is not
part of my work ethic, as I believe it creates an unproductive work
environment. At the end of the day, we all have to work together.

W3C, IETF, WebRTC: this is not a competition. This is not a zero-sum game.
Who was first, or who was or is better according to whatever perceived
metric, is not important. Making the technology better, and making WebRTC
successful so that everybody who worked on it ends up looking better, is,
IMHO, the goal.

*Whoever contributes, however small and insignificant the contribution,
deserves respect for the effort. They did not have to contribute; many do
not. I would prefer that those who make the effort to contribute feel
empowered and rewarded, so that they want to learn more, contribute more,
and contribute better.*

Soares is a young Malaysian Chinese developer who contributed those tests
as his first contribution to the global effort, and did all the heavy
lifting when nobody but Harald, Dom, and myself was writing any tests. The
current thread makes him, and I suppose all first-time and community
contributors, afraid to work on WebRTC, because they feel their hard work
will not be acknowledged or respected, and that they are going to be
publicly burned for the 1% of their work that is improvable. "Not useful at
all", "your tests is absurd", ... This is quite intense.

As a recognised WebRTC expert, your opinion matters to all those in the
community who know less than you, and that is quite a few people. With
great power comes great responsibility. I would love to see you shepherd
people in, instead of beating on that nail again and again.

Your remarks are technically correct. Please note that we have already
spoken with Bernard, Harald, Huib, and the other browser vendors, and have
decided to go ahead as a community effort to address the few remaining
shortcomings you pointed out. I have allocated time, on CoSMo money, for
Soares to do the right thing, and I specifically requested that someone
from the Edge team be available for a better review process. We would
welcome your review as well once the tests are modified. We would just
humbly ask you to formulate your comments in a more nuanced way.


Dr. Alex.
Alex. Gouaillard, PhD, PhD, MBA
President - CoSMo Software Consulting, Singapore

Received on Tuesday, 6 February 2018 05:53:11 UTC
