Purpose of the Project Application queue

kscheirer

The queue's purpose is not stated clearly enough, and the complaints I've seen arise from that.

The first purpose is to evaluate the applicant in order to grant "git vetted user" status.

The second portion is the module review: we evaluate the applicant mainly through a submitted module's code. Too often applicants, and sometimes even reviewers, confuse this module-review portion with the main purpose of the queue. Perfecting the module is not actually our goal.

The third portion of the process is code mentoring. This closely overlaps with the review checklist's "Proper use of the Drupal API" item. As one of the fuzziest requirements, it is where we get into trouble: reviews that are too strict, inconsistency among reviewers, etc. However, it is also one of the most important pieces - it's our one and only chance to influence new maintainers before they are let loose in the world of contrib.

So what's our goal again? To grant an applicant the power to publish an unlimited number of modules in any state they choose, not to publish one particular module.

I believe we can categorize most applications, and require applicants to choose a category in the initial post. This will focus the reviewers' actions and prevent complaints. Categories such as:

  1. I want to publish this one module and am not interested in receiving "git vetted user" status.
  2. I would like code mentoring, and to eventually be granted "git vetted user" status.
  3. I would like to be granted "git vetted user" status as quickly as possible, and am not interested in code mentoring.

Number 1 is already allowed by our review process, but applicants may be unaware it is an option.
Number 2 is our typical review process, with full reviews from a variety of reviewers.
Number 3 is the only real change I'm proposing: reviewers would be limited to blocking issues only. For applications in #3, the "Proper use of the Drupal API" check should be inverted to "No improper use of the Drupal API". What's improper? Not running user-facing strings through t(), security holes, or code that belongs in a hook implementation. Probably more items as well - but the point is that they're well-defined issues which we document and can then reference in the review.
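
To make "well defined" concrete, here is a minimal hypothetical sketch of the first two items, assuming a Drupal 7 module (t(), drupal_set_message(), and the @name placeholder are real Drupal 7 APIs; $account stands in for a loaded user object):

  // Improper: a literal user-facing string with user input concatenated
  // in unsanitized - untranslatable, and a potential XSS hole. Both are
  // well-defined, documentable blocking items.
  drupal_set_message('Settings saved for ' . $account->name);

  // Proper: t() makes the string translatable, and the @name placeholder
  // is sanitized for output automatically.
  drupal_set_message(t('Settings saved for @name', array('@name' => $account->name)));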

To those who say we're too strict ("contrib doesn't require xyz...") - I say "Yes!" It's our one chance to guide you. Work with us on this one module and then you're free! Try out our coding standards at least once, our Git conventions, our file structure, etc. These are regarded as best practices in Drupal - try them once, and if you don't like them, you're free to publish future modules however you like.

To those who say the process takes too long - go with Number 1 or 3!

To those who say, "I don't care what these random people have to say about my code" - go with Number 3!

To those who say the reviewers are inconsistent - yes, that's the nature of people. But you chose Number 2! Change your application to Number 1 or 3 if you don't find the mentoring helpful.

Comments

David_Rothstein

Why isn't one of the options this: "I would like code mentoring, and I would also like to be granted 'git vetted user' status as quickly as possible"?

An applicant who is interested in code mentoring shouldn't have to be "punished" (in terms of a delay in their application being approved) in order to get it... And as long as reviewers make proper use of the sandbox module's issue queue, I don't really see why the two have to be tied together.

For example: https://drupal.org/node/1343842#comment-5836684 (granted, this is the only project application review I've ever done, but I felt it went well). In that review I found a total of 15 issues, 2 of which were blocking (security issues); the other 13 were non-blocking "code mentoring" kinds of issues. I marked the application "needs work" based on the first two, and marked it RTBC as soon as those two were fixed. The other 13 were left to be dealt with whenever the module maintainer wanted (and in fact he fixed many of them quickly).

A key point is that because they were filed as separate issues, they were not lost as soon as the application was approved and the original issue marked fixed; mentoring could continue in the individual issues for as long as it needed to. If reviewers were generally encouraged to split their reviews in this way, perhaps it would make the process faster for everyone, while still encouraging applicants to ask for code mentoring (and all the benefits that come with it)?

It would be useful if there were a way for already-vetted git users to signal to the world that they still wanted new reviewers to come along and review their code for mentoring purposes (in the module's issue queue), though. I guess that doesn't really exist now.

kscheirer

I think we could handle this case by splitting a review into Blocking / Non-Blocking issues. Or Required to fix / Optional. Something like that. The applicant would still get the full review, but know which items require action if they want to move on.
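
As a purely hypothetical sketch (the headings and items are illustrative, not an existing template), such a split review might look like:

  Blocking (must be fixed before RTBC):
  - User-submitted text is printed without sanitization (security hole).

  Non-blocking (optional; could be filed as sandbox issues):
  - Several functions are missing doc comments.
  - The README could explain the configuration steps.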

David_Rothstein

Yeah, exactly. A few other things that might help encourage that:

  • Make sure reviewers understand not to mark an application "needs work" for non-blocking things (this is already stated at https://drupal.org/node/532400, but I wonder if people miss it).
  • Encourage reviewers to create separate issues in the sandbox project's queue (and link to them) for non-blocking things (https://drupal.org/node/1216962 mentions it as an option but says there are pros and cons - not sure what the cons would be).
  • Add some kind of documentation at https://drupal.org/node/1011698 for what happens when you are done with the process but still want mentoring; this could include following up with reviewers on still-open issues in your queue, or links to https://groups.drupal.org/peer-review or elsewhere (thanks for that, @sreynen; I think I once knew that group existed but forgot all about it - it is not very active, so it could use some advertising).

With those changes I think the application categories you proposed could really be limited to just two:

  1. I want to publish this one module and am not interested in receiving "git vetted user" status.
  2. I would like more detailed reviews, and to be granted "git vetted user" status.

But either way, I think they'd be helpful.

klausi

The con of creating separate issues is that it means more work for reviewers and applicants. I can do a module review in 15 minutes; if I find 10 issues and creating each issue node takes 2 minutes, my review time suddenly goes up from 15 minutes to 35.

And somebody (the applicant) has to process those issues, close them, etc.

So creating separate issues is not appealing to me, but if other people want to - that is fine with me as well.

fuzzy76

I'm wondering if requiring reviewers to split their review into a blocking and a non-blocking section might be an easier way to reach the same goal (it would force them to recognize the difference between the two types of "problems").

Peer Review group & implementation plan

sreynen

> It would be useful if there were a way for already-vetted git users to signal to the world that they still wanted new reviewers to come along and review their code for mentoring purposes (in the module's issue queue), though. I guess that doesn't really exist now.

It does exist, in many places, including this group:

https://groups.drupal.org/peer-review

I would love to see that group used more, but the Drupal community is also full of mentoring opportunities all over the place in less formal ways: in IRC, forums, meetups, issue queues, etc. This queue has an especially concentrated level of mentoring, but it's far from the only opportunity. If we think there's a large group of people coming here primarily for the mentoring, we should probably find ways to point them to the many other places where they can get that.

Back to the initial suggestion...

> I believe we can categorize most applications, and require applicants to choose a category in the initial post.

That sounds good to me. What would implementation look like? Maybe tags for the different goals?

greggles

It could be tags, or just part of the template that people use to create their issue, where an H3 section of the original post says "I want X/Y/Z".
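
As a hypothetical illustration (the heading and wording are made up, not an existing d.o template):

  <h3>I want: fast vetting</h3>
  I would like to be granted "git vetted user" status as quickly as
  possible, and am not interested in code mentoring.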

I also agree with David Rothstein's proposal, a bit. I guess we need to choose our words carefully and explain the options well, so that people feel they have a good reason to pick something that is essentially labeled the "slowest road."

kscheirer

Issue tags or plain text in the application template. I think the latter would be easiest for new people.

It is a good point that there are many other options for code mentoring. Should our policy be to send those people to another queue and have them come back when their module is "done"? Or should we invite reviewers from those other areas to come play here?

klausi

My modus operandi in reviewing is mostly that of a gatekeeper: only critical problems such as security issues or data loss should hold up an application, IMO. Wrong API usage can be a blocker if there are many different places in the code where things are done completely wrong - and by "many" I mean at least 5 places. At that point I get the feeling that the person does not know what they are doing, and the sum of the problems becomes a blocker.

So I think I am already working in your number 3 category.

I'm glad that we have almost completely eliminated the "problem" people had with coding standards - applicants fix whatever pareview.sh tells them to fix. People trust robots, so we should automate whatever we can when mentoring applicants. Some areas where we already have automated mentoring:

  • coding standards (PHP_CodeSniffer's Drupal standard, in pareview.sh)
  • simple best practices like how to name classes, etc. (the DrupalPractice sniffer, in pareview.sh)
  • how to set up Git and a version-specific default branch (pareview.sh points to the doc pages)
  • automatically pointing new applicants to the automated review tools (PA robot)

So there may be even more room for automated mentoring - ideas welcome.
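
For applicants who want to run the same checks locally before applying, the coding-standards portion of pareview.sh can be reproduced with PHP_CodeSniffer and the Coder module's rulesets (the module path below is a placeholder):

  # Drupal coding standards (the same sniffs pareview.sh applies).
  phpcs --standard=Drupal sites/all/modules/mymodule

  # Best-practice checks via the DrupalPractice standard.
  phpcs --standard=DrupalPractice sites/all/modules/mymodule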