- From: James Graham <jgraham@opera.com>
- Date: Tue, 19 Mar 2013 16:44:51 +0100 (CET)
- To: Tobie Langel <tobie@w3.org>
- cc: public-test-infra@w3.org
On Tue, 19 Mar 2013, Tobie Langel wrote:
> This is jeopardized if the review tools are going to be specific to WGs
> or worse, individual reviewers. Review tools aren't like text editors.
> Unless there's seamless bi-directional syncing of comments between GH
> and the review tool (that's *hard* and tends to defeat the purpose of
> using an external tool), the review tool is being forced onto the
> contributor (and the rest of the community that might want to follow
> along) by the reviewer. As a contributor, if I have to learn a
> completely different review tool depending on whether I'm writing a
> test for the WebPerf WG (e.g. GH itself), the CSS WG (e.g. Shepherd),
> or WebApps (e.g. Critic), I'm just going to give up.
I think that strongly depends on what type of contributor you are. If you
are a regular contributor, the productivity benefits of learning better
tools are significant enough that they are well worth the investment of
your time. If you are an occasional contributor, that perhaps isn't the
case. Happily we can deal with that through good sense rather than by
forcing lowest-common-denominator policies on everyone. For example, we
can use the high-quality toolchain with regular contributors, and with
occasional contributors whose contribution makes it worthwhile and who
opt in to the better tools.
> While I understand the limitations of the GitHub review tools, they work
> fairly well for a number of projects with a lot more going on than ours.
> I feel the reason this works is that those projects adopt the "commit
> early, commit often" Git mindset, which guarantees small atomic commits
> and PRs focused on a single topic. Until we enforce that in the way we
> work with Git, it'll be difficult to assess faithfully whether the
> limitations of GH review tools are intrinsic or of our own making.
In the specific case of testcase development, the "lots of tiny pull
requests that can be dealt with by one person in a few minutes" approach
isn't going to work. The reality is that many tests are written by
companies as they develop the corresponding feature. For policy reasons it
is, in many organisations, not acceptable to release the tests as they are
written, but instead they are batch-released at the end of the project.
This leads, inevitably, to contributions with thousands of tests in
hundreds of files being added as a single dump. It is unrealistic to
expect people to split such a contribution up into hundreds of one-file
pull requests.
For this kind of commit the GH code review tools are a joke. As far as I
can tell there isn't even a way to mark which files you have already
looked at, or to have multiple people review parts of a patch without
stepping on each other's toes. Such changes are hard enough to review as
it is; adding accidental complexity through a bad review UI will just make
certain that they are never looked at. This is already a huge problem;
review is difficult, tedious, and no one is being paid to do it. If we
want to do review at all we have to use all the help that we can get.
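
To make that concrete: the raw data needed for such help is at least
exposed by GitHub's v3 API. Below is a minimal, hypothetical sketch (in
Python with the `requests` library; the repository name and token are
placeholders) that lists the files touched by a pull request and splits
them round-robin among reviewers - one possible way to let several people
divide a large patch without stepping on each other:

    import requests

    API = "https://api.github.com"
    REPO = "w3c/web-platform-tests"   # placeholder repository
    TOKEN = "<oauth token>"           # placeholder credential

    def pr_files(number):
        """Yield the filenames touched by a pull request, following pagination."""
        url = "%s/repos/%s/pulls/%d/files" % (API, REPO, number)
        while url:
            resp = requests.get(url, headers={"Authorization": "token " + TOKEN})
            resp.raise_for_status()
            for entry in resp.json():
                yield entry["filename"]
            # GitHub paginates large diffs; the next page is in the Link header
            url = resp.links.get("next", {}).get("url")

    def assign(number, reviewers):
        """Round-robin the changed files across the given reviewers."""
        allocation = {r: [] for r in reviewers}
        for i, name in enumerate(sorted(pr_files(number))):
            allocation[reviewers[i % len(reviewers)]].append(name)
        return allocation
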
> If we do end up finding that better tooling is necessary, we have to
> carefully weigh the tradeoffs we're making by switching to a custom
> solution. At which point we'll also have to decide whether we keep the
> GitHub Issue system up (in which case we'll need to fully synchronize
> the comments on both systems) or close GitHub issues and switch the
> whole repository to using something different (which we'll need to have
> agreement on and also have to archive).
I don't understand why the GH issues system is relevant here.
Rather than hope that we can get by with bad tooling by pushing
unreasonable demands out onto those submitting tests ("no pull requests
that can't be reviewed in a single short sitting"), we should actively
embrace the ability to build on top of GitHub to improve our experience.
I don't think we are at the "can we live with substandard tools" stage;
we clearly - both in theory and in practice - have significant enough
needs that we can't. The only question is how to mitigate the downsides
that have been identified. For example, with Critic I could make it add
a comment to the PR indicating that there is a Critic review and, once
some work happens on that review, place another comment indicating that
the review is in progress on Critic. The initial comment could say
something like "if you would prefer to use GitHub for this review,
please comment here". We could further rely on people's common sense not
to foist external systems on new contributors making small contributions.
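
A hypothetical sketch of the comment-posting side (again Python with
`requests`; the repository, token, and the hook names Critic would call
are placeholders I've invented), using GitHub's v3 issue-comments API,
which also covers pull requests:

    import requests

    API = "https://api.github.com"
    REPO = "w3c/web-platform-tests"   # placeholder repository
    TOKEN = "<oauth token>"           # placeholder credential

    def post_comment(pr_number, body):
        """Create a comment on a PR (PRs share the issues comment API)."""
        url = "%s/repos/%s/issues/%d/comments" % (API, REPO, pr_number)
        resp = requests.post(
            url,
            json={"body": body},
            headers={"Authorization": "token " + TOKEN},
        )
        resp.raise_for_status()
        return resp.json()["html_url"]

    # Hypothetical hooks for Critic to call:
    def on_review_created(pr_number, review_url):
        post_comment(
            pr_number,
            "This submission is being reviewed on Critic: %s\n"
            "If you would prefer to use GitHub for this review, "
            "please comment here." % review_url,
        )

    def on_review_activity(pr_number, review_url):
        post_comment(pr_number, "Review in progress on Critic: %s" % review_url)

Critic would call these hooks when a review is created and when activity
happens on it; nothing here requires the contributor to leave GitHub
unless they choose to.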
I also notice that you keep talking about archiving comments. I don't know
what the use case for this is, so I don't know why you think it's needed.
Knowing that v1 of some submission had a typo isn't a very interesting
fact, as long as it gets addressed. The only interesting fact, it seems
to me, is that a code review has zero outstanding issues at the point at
which it is merged.
Received on Tuesday, 19 March 2013 15:45:30 UTC