Proposed Code Review Checklist (With Gates)

Note: The following is provided as a suggested 'checklist' for code reviews, with some modifications to the existing process thrown in.

The largest change is a divergence from the "Stick to a review from start to finish" concept, as the proposal below would encourage folks to jump on/off at the start/end of any given gate. The motivation behind this is to allow for less experienced reviewers to get involved in the review process without committing to a full review (as the 'full review' commitment can be a bit daunting for a new contributor).

A second suggestion is the use of tags to validate the clearing of each gate, as discussed in other sections of this group. While there is consensus within the group that tagging gates could become unwieldy, auto-creating tags is not something that can be done today 'out-of-the-box' ... so I've attempted to group the checklist into sections, thus minimizing the tags used while still providing the ability to use tagging to indicate where an application sits in the process.

The third is the addition of an 'Applicant Review' gate. I personally feel that the linkage between 'Module Review' and the granting of the 'Full project access' role should be severed, allowing the flexibility within the process to grant one without a dependency on the other (think '3 module reviews = full access', or 'Module denied, but access granted' situations) ... which would then allow a 'probationary' period for new applicants, where we can ensure they are up to speed on managing the issue queue, project packaging, and git best practices before setting them free in contrib.

In any case, my proposed 'checklist' is posted below - feedback, comments, suggestions for improvement are encouraged!


(Proposed) Full Project Application Review Checklist

Component 1: Screening

"Pre-screening" step can be performed by pretty much anybody (other than the applicant)

Identity Check

  • Good Project Title
  • Project Page contains a detailed description

    • Concise introductory ‘overview’ (purpose should be clear in first few sentences)
    • Describes what the module does
    • Identifies the module’s intended audience
  • Application includes a link to the Application Sandbox
  • Innovation – Module provides something ‘new’ (i.e. the Duplicate Module test)

Basic Repository Check

  • Ensure repository actually contains code (it may have an empty ‘master’ if it also contains a named branch)
  • Basic Formatting (Spacing/Indentations)
  • No CVS ID Tags

Basic Documentation Check

  • Contains a README.txt
  • Implements hook_help()
  • Doxygen style comments exist for all functions
  • Doxygen @file blocks exist for each file (see the sketch below)
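
For illustration, here is a minimal sketch of what these documentation items might look like in practice. The module name 'example', the file contents, and all strings are hypothetical placeholders, not taken from any real application:

```php
<?php

/**
 * @file
 * Hooks and helper functions for the hypothetical Example module.
 */

/**
 * Implements hook_help().
 */
function example_help($path, $arg) {
  switch ($path) {
    case 'admin/help#example':
      // A concise description of what the module does and how to use it.
      return '<p>' . t('The Example module does X, Y, and Z.') . '</p>';
  }
}
```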

License Check

  • Code doesn’t include any 3rd party libraries
  • No license.txt files

Basic Coding Standards

  • Proper opening PHP tags (long form, not short); no closing tags (see the snippet below)
  • Runs through ‘Coder’ Module cleanly
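
To illustrate the tag rule above, a hypothetical snippet ('example' is a placeholder module name): files open with the long-form tag and deliberately omit the closing tag.

```php
<?php
// Correct: the long-form opening tag appears above; short open tags are not
// permitted by the Drupal coding standards.

/**
 * Implements hook_permission().
 */
function example_permission() {
  return array(
    'administer example' => array(
      'title' => t('Administer the Example module'),
    ),
  );
}
// Correct: the file ends without a closing tag, which prevents stray
// whitespace from being sent to the browser.
```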

If any issues found:

Reviewer Tasks
  • Add a comment to the 'Project Application', with details on any screening issues found.
  • Set the status of the Project Application to 'needs work'
Applicant Tasks
  • Address the issue(s) outlined in the Pre-Screening ticket
  • Add 'fixed' comment to any Pre-screening tickets, and update status to 'fixed' in the sandbox Issue queue
  • Add 'fixed' comment in the Project Application ticket, and update status to 'needs review'

If/when all checks passed:

Reviewer Tasks:

  • Check for 'Pre-Screening' issues in the sandbox issue queue, and ensure all items have been addressed (and applicant is managing the issue queue)
    • If issue queue is not being managed, add comment in the Project Application, asking applicant to update the ticket status in the Sandbox queue
  • Add a "Pre-screening cleared" comment to the Project Application
  • Tag the Project Application with CR:PreScreeningCleared
  • Set the status of the Project Application to 'needs review'
    • There is a suggestion that, instead of tagging, we could use the 'active' status to indicate that an application has cleared basic screening ... thoughts?

Component 2: Duplicate Module Check

Based on the amount of discussion this has caused on existing threads, I figured it deserved its own section ... but it should be considered at each step of a review.

Duplicate Module Check

  • Search for Modules with similar functionality
  • Better off collaborating with an existing module author?
  • Promote collaboration over competition

If any issues found:

Reviewer Tasks
  • Add a comment to the 'Project Application', with details on any duplication issues found.
  • Set the status of the Project Application to 'needs work'
Applicant Tasks
  • Address the issue(s) outlined in the Module Duplication ticket
  • Add 'fixed' comment in the Module Duplication ticket, and update status to 'fixed' in the sandbox Issue queue
  • Add 'fixed' comment in the Project Application ticket, and update status to 'needs review'

If/when all checks passed:

Reviewer Tasks:

  • Check for 'Module Duplication' issues in the sandbox issue queue, and ensure all items have been addressed (and applicant is managing the issue queue)
    • If issue queue is not being managed, add comment in the Project Application, asking applicant to update the ticket status in the Sandbox queue
  • Add a "Module Duplication cleared" comment to the Project Application
  • Tag the Project Application with "CR:ReadyForReview"
  • Set the status of the Project Application to 'needs review'

Component 3: Technical Review

This is the bulk of the actual detailed 'code review' process, where some advanced knowledge/experience may be required.

Detailed Coding Standards

  • Module name-spacing used on functions and variables
  • Any custom variables are removed on uninstall (see the sketch below)
  • All user interface strings run through t()
  • etc.
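
A short sketch of the kind of code these checks look for, again using the hypothetical 'example' module (function and variable names are illustrative only):

```php
<?php

/**
 * Implements hook_uninstall().
 *
 * This lives in example.install; any variables the module set via
 * variable_set() are removed when the module is uninstalled.
 */
function example_uninstall() {
  variable_del('example_items_per_page');
}

/**
 * Builds a status message (in example.module).
 *
 * The function name carries the module's 'example_' namespace prefix, and
 * the user-facing string runs through t() with a placeholder.
 */
function example_build_status($count) {
  return t('Processed @count items.', array('@count' => $count));
}
```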

Best Practices

  • Makes effective use of Drupal APIs
  • Makes effective use of the Theme layer (see the sketch below)
  • Makes effective use of jQuery for javascript code (ie. Drupal.behaviors)
  • Extensibility (provide API for other modules)
  • etc.
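
As one illustration of 'effective use of the Theme layer', a module can register its markup through hook_theme() rather than hard-coding HTML in its logic. A minimal sketch, with hypothetical names:

```php
<?php

/**
 * Implements hook_theme().
 */
function example_theme() {
  return array(
    'example_badge' => array(
      'variables' => array('label' => NULL),
    ),
  );
}

/**
 * Returns HTML for a small badge; themes can override this function.
 */
function theme_example_badge($variables) {
  return '<span class="example-badge">' . check_plain($variables['label']) . '</span>';
}
```

Calling code would then use theme('example_badge', array('label' => $title)) instead of assembling the markup inline, which leaves the presentation overridable by any theme.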

Security

  • All user-submitted content is filtered before output
  • Code uses proper database access functions and placeholder syntax (see the example below)
  • etc.
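
To make the first two items concrete, here is a hedged sketch using Drupal 7 APIs (the function name and query are illustrative, not from any particular module):

```php
<?php

/**
 * Safely renders a title supplied by a user.
 */
function example_render_title($raw_title) {
  // Filtered before output: check_plain() escapes HTML special characters,
  // so user input cannot inject markup or scripts.
  $safe_title = check_plain($raw_title);

  // Placeholder syntax: the database layer escapes :title itself, rather
  // than the value being concatenated into the SQL string.
  $nid = db_query('SELECT nid FROM {node} WHERE title = :title', array(':title' => $raw_title))->fetchField();

  return $safe_title . ' (node ' . $nid . ')';
}
```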

If any issues found:

Reviewer Tasks
  • Add a comment to the 'Project Application', with details on any technical review issues found.
  • Set the status of the Project Application to 'needs work'
Applicant Tasks
  • Address the issue(s) outlined in the opened ticket(s)
  • Add 'fixed' comment in each ticket when addressed, and update status to 'fixed' in the sandbox Issue queue
  • Add 'fixed' comment in the Project Application ticket, and update status to 'needs review'

If/when all checks passed:

Reviewer Tasks:

  • Check the sandbox issue queue, and ensure all items have been addressed (and applicant is managing the issue queue)
    • If issue queue is not being managed, add comment in the Project Application, asking applicant to update the ticket status in the Sandbox queue
  • Add a "Technical Review cleared" comment to the Project Application
  • Remove the 'CR:PreScreeningCleared' tag from the Project Application, and replace it with a "CR:ReviewComplete" tag
  • Set the status of the Project Application to 'RTBC'

Component 4: Applicant Review

Hopefully, by the end of the review process, we have a good idea of the applicant’s capability regarding the following.

Issue Queue Management

  • Demonstrated knowledge of how to manage their issue queue

Git Best Practices

  • Demonstrated knowledge of git practices (branches, creating patches, applying patches, etc.)

Packaging and Naming

  • Demonstrated knowledge of Drupal packaging/branching/naming conventions

Comments

From someone new to the review process

Posted by dreamleaf

From someone new to the review process and seeing a need for this, here are my brief thoughts.

I like the general breakdown, though the duplication check (gate 3) should be a standard check in each gate right from the start... different people will know of different modules, and it's unlikely a single reviewer would catch every instance of duplication. A disheartening part of the process can come when the review seems to be going well, only to have it shot down for duplication.

That said, I also hold to the line that duplication should not be a barrier to passing a gate, IF good reasoning can be presented for it and agreement can be reached.

My only other thought is that allowing people to dip in and out of the process (a good idea) is hard to track without a physical checklist. With several steps per gate, it would be easy to miss a check, or to duplicate checking. Would it be a stupid idea to actually have a physical checklist, with the responsibility for ensuring statuses are set correctly resting with the reviewee?

So in practice, part of the project application process would be to attach the checklist in the queue, then when a reviewer does a check, they create an issue in the sandbox. When the reviewee gets to a gate and passes, they update the status.
This would provide the reviewee with a list of what is required, and the reviewers a snapshot of what has already been checked.
Maintenance of the checklist could be entirely the reviewee's, with the sandbox issue queue used by reviewers (open issue > check license > complete > mark as fixed > update checklist).

Sandbox Issue vs. Project App. Issue

Posted by mermentau

So in practice, part of the project application process would be to attach the checklist in the queue, then when a reviewer does a check, they create an issue in the sandbox.

Yesterday I saw an application that fell behind in the process because its review started in the sandbox issue queue and nothing had been updated in the Project Application issue. I think using both is a recipe for confusion. http://drupal.org/node/1149210

Using the Sandbox

Posted by ccardea

I agree there can be confusion if the Project Application queue isn't updated when it should be. Still, I find the sandbox issue queue to be a useful tool. I think the bigger issue is how to document the review. I saw a project yesterday that had been marked RTBC, and there was no indication that any kind of licensing, module duplication, security, or best-practices review had been done. There were a few comments along the lines of 'well, you need to work on your commenting', but there was no mention of any substantive issue, passed or failed. It was just, 'well, I think we can go ahead and mark this RTBC.'

Could Have Been Mine

Posted by mermentau

I wasn't under the impression that a reviewer had to make note of all the things he looked at. I think if you look at most of the reviews that have finally been marked fixed, you will find all sorts of variation on this.

I guess we could start reviewing reviews, but from following the writings of zzolo and others that have the power to "fix" I get the impression that rigidness needs to yield to the compliant efforts of the applicant. The idea is to have a process that's not overwhelmed by the number of applications, and yet make sure that applicants prove knowledge of what it's all about.

Remember that getting approved doesn't guarantee that future projects will be compliant. My example is here: http://drupal.org/node/1165368 As far as I can tell, it's the 2 maintainers' first project. When it started, the 3rd party files were in sites/all/libraries, but after that they moved them into the module itself.

It's Just Common Sense

Posted by ccardea

There's never been a requirement, or any discussion, of how a review should be documented. If you were looking at this from a professional point of view, all of the variation in how things get done would be a problem. Just put yourself in the git admin's position. You have a project that's been marked reviewed and tested by the community; you open it up and there's a little discussion of some documentation issues and maybe a little discussion of some code, but there's no mention of licensing, or module duplication, or security. How do you know if anybody looked at these things? If you're conscientious, how can you approve the project when you don't know if anybody looked at these issues? From my point of view, this is an area of the code review process that needs improvement.

If one of the goals of the review process is mentoring

Posted by dreamleaf

If one of the goals of the review process is to assess/mentor people in the Drupal way of doing things, then being familiar with their own issue queues should be essential. Nowhere else on d.o are multiple problems handled in the same issue; they always get split off into their own - is this any different?

Also, the whole "do-ocracy" nature of d.o would say that the prospective maintainers of a project should be willing to be active participants in their review process. Maintaining a checklist would not be that much extra hassle, AND it places the emphasis on them to keep the process moving (once it actually gets moving).

splitting up into separate issues is a bad idea

Posted by zzolo

I am very much against splitting up specific parts into a sandbox issue queue. I know this would be much better organization, and would be more in line with how regular projects are maintained, but it will create a huge barrier for reviewers and applicants.

Reviewers would have to go to multiple places to see what is going on with a review. Personally, I would stop doing reviews if this were the case. If d.o had some way of actually creating an aggregate issue that would support all of this, then it could be feasible.

Applicants will have to go to multiple places, in a system they might not be all that familiar with. Many applicants may not even know there is an issue queue for their project. They may not really understand the workflow that is assumed with issues. Yes, it is mostly on the applicant to push things along, but as mentors, making this easy is important. Many applicants come into the process not even understanding the "needs review"/"needs work" workflow. Making this more complicated is not good.

Overall, explaining this to new reviewers and applicants will suck. And yes, this is more a technological problem with the capabilities of d.o than an ideological one about using issue queues.

--
zzolo

Sandbox issue queues work well for mentoring

Posted by sreynen

Are you saying you're against adding gates as issues, or against using the sandbox issue queue at all? I agree on the former, but the latter is the whole reason we have issues queues on sandboxes.

I don't think use of the sandbox issue queue should be required, but I personally use it for specific bugs on every review I do and have seen none of the problems you've suggested. Many applicants probably don't know there's an issue queue on their project or aren't familiar with the statuses, but these are absolutely things they need to learn to effectively maintain projects. Pointing to these things during the review seems entirely in line with our mentoring goal. Simply saying "I added some issues to your project queue [link]. Change this back to needs review when those are resolved" has proven adequate explanation in my experience.

This discussion started with an example where the issue queue was used outside a review. That does indeed complicate the review process, which is why I also link from the application to the issue queue. But we also need to remember that sandbox projects are used beyond our review focus here. Not only do I think we shouldn't avoid use of sandbox issue queues; we couldn't even do that if we wanted to, as people arrive at that issue queue completely outside the review process.

good points.

Posted by zzolo

Hi @sreynen. Good input. I totally love the sandbox issue queues, and think they are one of the great improvements of the Git migration. I also think it's awesome that you are using them effectively in your reviews.

But I think it would really hinder getting new people to do reviews if we required it as a mechanism for doing reviews.

Yes, new contributors should know how to use their issue queue, but so far that has not been a goal of this process. I would be glad to add it, but it is one more thing to go through in the process (which is already pretty slow).

Overall, the application queue becomes a place to have a single point of communication between reviewers and applicants. If we make this distributed (using issue queues), I think it will greatly decrease the efficiency of the process (which is already really not efficient).

--
zzolo

I don't think it's a requirement

Posted by greggles

I don't think it's a requirement, but a best practice.

I tend to create the most important items as discrete issues in the sandbox issue queue and then use the [#123] style notation to link to them from a comment on the application. This works pretty well - if I see all of the issues I've linked are green then I know things are good and I can RTBC (after confirming them, of course).

I agree with sreynen's points that this helps teach them and also that we couldn't block use of sandbox issue queues if we wanted to.

So we can let this be something that the reviewer optionally does if they want to.

Personally in my reviews I try to state everything I've done: "Confirmed there is no license.txt file, confirmed X, confirmed Y, found problem with Z" or I'll say "I only looked at this one aspect and found these two problems..." so that the next reviewer who looks at it will know what remains to be reviewed, if anything. I definitely think that's a best practice for collaboration around reviews.

Checklist

Posted by ccardea

It's not a bad idea to have an actual checklist that would be attached to the application, but it would require some custom programming. I don't know how that sort of thing gets done on Drupal.org. A simpler solution that you can implement right now is just to make a note in the project application queue of the checklist items that you have completed. The idea is to provide documentation that you actually did the work, so you should include notes that let an administrator know what you looked at and how any issues were resolved. Here is an example of how I do it.

Tried to document this approach

Posted by jthorson

I've updated the page to include a stab at a process which leveraged both the Sandbox issue queue and the Project Application queue in managing a review ... but I don't like it.

While I agree that helping someone get experience in managing their issue queue is a good thing, it adds unnecessary complexity to an already cumbersome process. The Project Application issue currently gives reviewers a single place to look in order to determine the status of an application ... if we have to start checking every sandbox issue queue at the same time, this can only further slow down the process.

I think the 'happy medium' here is to standardize on using the 'Project Application' as the location for the checklist (as we have been to date), but leave the option open for reviewers to use the issue queue for deep-dives into issues that aren't solved within a comment or two ... the 'module duplication' discussions are a perfect example.

The key to making this work, however, is that every issue opened in the sandbox queue MUST also be linked in the project application queue, to provide reviewers notice that the discussion is taking place ... but we need to avoid a 'spaghetti maze' of cross-links; so I wouldn't make it a requirement on every step.

Updated checklist to make the 'Issue Queue' piece optional

Posted by jthorson

Updated checklist to make the 'Issue Queue' management piece optional, but still a documented process if reviewers choose to use it.

Reason for Changes

Posted by ccardea

I made some fairly extensive changes to this, because I think in some cases the cart was being placed before the horse. We need to make sure we're focusing on the right issues. Licensing, module duplication, overall understanding of Drupal APIs, and security are all considered high impact areas that can block a project application. Issues of coding style and documentation are minor and should not block an application. It doesn't make sense to beat up on somebody about coding standards and documentation when they could still be rendered irrelevant by a blocking issue.

Reason for Reverting

Posted by jthorson

While your changes (and motivation) certainly make sense from the 'priority of issues' perspective, they wipe out one of my primary reasons for structuring things the way I did ... to enable more participation from less experienced members of the Drupal community in the review process.

The reason I placed the items I did in the 'pre-screening' stage, is that they are simple, two minute checks, which can be performed by anybody in the community ... in most cases, you wouldn't even need to be a developer. My hope here is four-fold:

  • more people can step in to help with the 'pre-screening' & 'screening' part of the review process,
  • the applicants will take care of the screening 'quick-fixes' before actually submitting their application,
  • the screening allows those doing actual code reviews to focus on actually reviewing 'code', rather than coding 'standards', and
  • having an application quickly pass through pre-screening and screening at the beginning of the queue is better than applying and not hearing ANYTHING for five weeks - it lets the applicant know that, while the process may be long, there ARE actually people looking at the applications.

While coding standards and documentation are 'minor' compared to the high impact areas you identified, they are also 'minor' fixes that can significantly help a code reviewer ... having proper file/function document blocks and code indentation makes it easier to perform the actual code review - and I'd argue that (for the amount of effort involved to add them), things like this WOULD block an application. It's just that, in practice, they never do ... as applicants happily comply when a review ends with a 'just clean up these few items, and you're ready'.

Yes, it would be frustrating to a potential applicant to spend a day cleaning up their code to meet coding standards, just to be blocked down the road. On the other hand, this is all part of the 'education' of the applicant, to help them become a better Drupal contributor ... and if they have done their homework up front, they really shouldn't have any work to do to clear the screening step.

I've reverted the page back to my original structure, to make sure that the concept of 'let anyone perform the quick checks' does not get lost ... but I'm certainly still flexible and open to debate on the structure.

Get Your Priorities Straight

Posted by ccardea

First of all, the priorities that I identified (Licensing, duplication, Drupal APIs, security) were not really identified by me, they were identified by webchick, in a post that I've quoted several times in this group. (You know webchick, #2 core committer). If you look at the video of her presentation in Copenhagen (I think), one of the problems that she identified with the code review process was reviewers focusing on unimportant issues like coding standards and documentation. So what you are outlining here is really perpetuating the problem.

You're focused on getting more less-experienced people involved, but that is not really the solution to the problem. First of all, it doesn't help if lots of people get involved focusing on the wrong issues. A prerequisite for having less experienced people do partial reviews is having a way to track what has been done, which is not in place; nor has it been shown that using tags will work. Without a good tracking system, there is the potential for an application to sit for 4 weeks, then someone does a partial review, then it sits around for another 4 weeks until somebody else can get to it. So without a good tracking system, it may be better to stick to having people follow reviews through to completion.

No one seems to have considered the issue of balance. Since these initial screenings are supposed to be quick and easy, even if you have lots of inexperienced people doing them you're still going to wind up with the same problem: a backlog of projects waiting for technical review.

I think you're making the process more cumbersome than it needs to be. The review process is really fairly simple:

Initial screening
* Licensing
* Module duplication

Technical Review
* Overall understanding of Drupal APIs and best practices
* Security

Polish
* Coding standards
* Documentation

If a technical reviewer feels that the documentation and coding are so bad that he can't complete the review, then it's easy enough to say so and point the applicant to the coding standards docs. As I said before, it doesn't make a lot of sense to beat people up about documentation and coding standards when those minor issues could be rendered irrelevant by issues that can block the application.

Priorities ...

Posted by jthorson

First of all, I know who webchick is, and I've watched her presentation (3 times in the last two weeks) ... let's try and keep the preaching to a minimum. This is, of course, open to interpretation, but her presentation certainly didn't give me the message that coding standards and documentation were in any way considered 'unimportant' in the review process. (They aren't the source of the largest issues, true ... but they are still an important component of the process.)

The purpose of this post, as advertised, is a "proposed" checklist with some "proposed" modifications to the process ... not to try and document the actual process. Let's take a step back from the 'backlog' problem, and look at this proposal more objectively.

I'm not suggesting that a project needs to clear ALL code review guidelines and documentation guidelines before they get around to a technical review. What I'm suggesting is that a non-reviewer can help perform a few simple 'pre-screening' checks, while the application is waiting for technical review. Why should it take an experienced reviewer (or a four-week wait) to point out that a project's repository doesn't actually contain any code for review? (Though I must admit ... I got quite the chuckle when you moved that particular list item down to the bottom of your 'Polish' stage!)

Yes, I am focusing on getting more less-experienced people involved in the process ... and putting a large emphasis on this. It is all about awareness ... I got involved in this group through the submission of a module application, and in a few short weeks have progressed to performing full reviews. I firmly believe that, if you can get more people involved in an up-front 'pre-screening' process (which doesn't require any significant knowledge or time commitment), then a significant number of them will stick around and get involved in the larger code review process - which is better than depending on a core group of 4 or 5 technical reviewers. I'm convinced that this IS the solution to the problem ... it's just not the same problem as the one you're defining.

I could probably run the entire queue through the 'pre-screening' stage in a single afternoon ... so I don't foresee the scenario where someone waits for four weeks, gets pre-screened, and then waits for four more ... my hope is that the screening process would help filter out the issues that aren't actually ready for review. Based on the applications that I've seen, I'd suggest that this step alone could shorten the 'needs review' queue by about 15%, in the first pass alone.

You state that no one has shown that the use of tags for tracking will work; so I've taken the feedback from other threads and tried to simplify it to something that CAN work ... and can work with the tools and environment that we have today, without any additional coding required on the Drupal.org site.

I did attempt to define a process which leveraged the Issue Queue more thoroughly than the current one. This was in direct response to the related Issue Queue comments in this thread, and I would freely agree that such a process is too cumbersome to be workable. However, I do not feel that adding the pre-screening checks as an initial gate contributes to an unwieldy process ... from a reviewer's perspective, having someone provide function doc blocks and proper indentation can ONLY make the job of reviewing a little bit easier.

On a related note, my motivation for this proposal

Posted by jthorson

On a related note, my motivation for this proposal came from my 'Hold an official Drupal Contrib Review Week' post (http://groups.drupal.org/node/150729).

If you position the licensing and module duplication checks as the first step in the process, then this is the first exposure that a new reviewer gets to the process. What I've learned, however, is that these two items also lead to the longest debates and most heated arguments. This creates a rather 'unwelcoming' environment for someone who's coming through the process for the first time ... and not one that makes me want to stick it out and contribute further to the process; especially when it starts to feel like a case of "the most stubborn wins".

However, even the most introverted individual can get involved in the basic pre-screening steps that I proposed, and lend some value back to the Drupal community ... one thing I DID get from webchick's presentation is that we need to make the whole process a little more friendly (for both applicants AND reviewers) ... I don't think her title "Why our CVS Application process SUCKS" was meant to indicate it is supposed to suck! ;)

If we can make that first introduction to the process a little more friendly, then a 'Contrib Review Week' proposal might actually work ... but I'm not volunteering my time to encourage people to come 'jump in the wrestling ring' that is the licensing/duplication discussions which served as my intro to this process!

Focus on the Important Issues

Posted by ccardea

There will be an approval process of some kind, similar to the existing CVS account approval process, but far less daunting and time consuming, because:

  • It will focus only on "big impact" items: overall understanding of Drupal's APIs, security review, license compliance, blatant duplication of other projects.
  • Reviewers can use standard tools like commit logs, diff, and the issue queue to understand the code and its evolution and get a "sense" of the maintainer, thus removing a lot of pain and guesswork from the approval process.
  • All non-critical follow-ups (coding standards compliance, naming conventions, etc.) can be handled in the standard issue queue way, and addressed by the author either prior to or after their sandbox makes the transition to a regular project.

This is a quote from webchick that spells out the review process in about three sentences. If you're not on board with this, then you're part of the problem. This wiki is rejected for thread duplication. There's already a post that's still on the front page, Review Process - Basic Steps. That post contains a link to a third page, which is where I found the link to this quote.

Collaboration, not Competition

Posted by jthorson

This wiki is rejected for thread duplication.

Sorry, but your opinion is rejected.

i) This is not a duplicate of the 'Existing Process' - it's a proposal for changes to said process.
ii) 'New Discussion Topics' are not subject to anyone's 'approval'.
iii) Your stubborn refusal to allow expression of a differing opinion comes across as unprofessional and childish.

From your own "quote" (emphasis added):

  • All non-critical follow-ups (coding standards compliance, naming conventions, etc.) can be handled in the standard issue queue way, and addressed by the author either prior to or after their sandbox makes the transition to a regular project.

Trying to address these issues up-front while waiting for a technical review is not a "problem". The only "problem" I've seen since I joined this group is people letting egos get in the way of progress.

How about this quote ... I'm sure you've seen it before ...

Collaboration, not competition

If you don't agree with the proposal, fine. You've stated your objections. But one person's opinion does not a consensus make - and your personal attacks (i.e. "you're part of the problem") completely undermine and distract from any legitimacy that your arguments may contain.

Collaboration is good

Posted by ccardea

The idea is to collaborate with others, not to insist that others collaborate with you. I've been consistent about where I take direction from.

no steps, please

Posted by zzolo

Things seem a bit heated here. I like all the discussion, though.

So, there have always been a lot of arguments about this process. I have read them all, and heard them many times. It is all about balancing completeness, thoroughness, community values and making the process manageable and sustainable.

We will NOT, I repeat, NOT make a perfect process. Please keep these things in mind going forward. Our goal right now is to define a good enough process, and as well, provide a review and updating process (see this page). I mention this, so that we can be a little forgiving with this. I just posted that we should have a draft of this sort of thing by DrupalCon London (also including a review period). Let's work on getting "something" together for now and work on incremental changes.

One thing I would like to point out is that we should not be making specific steps (that need to be done in succession), except for a full description and module duplication check up front, as these are extremely important (knowing that the applicant has looked at the contrib space is really important). It seems like making things a little more manageable is more about letting multiple people do a review; we can't really expect reviewers to do things in succession, nor do I think it's all that beneficial.

--
zzolo

Problems with multiple reviewers

Posted by ccardea

The scenario that I talked about above is no longer just a potential problem; I ran across an instance of it. This project has been in the queue since March. A couple of people took a look at it during the period from March 1 - April 17, then walked away from it. It sat for another 5 weeks before I found it.

This could actually be used to game the system, if reviewers are going to the back of the queue looking for projects to review (like I do). It's conceivable that someone with a project waiting for review could post comments in other applications so that they look like they've been touched recently.

All too common.

Posted by jthorson

It's conceivable that someone with a project waiting for review could post comments in other applications so that they look like they've been touched recently.

I certainly hope that this thought isn't the source of your recent hostility towards me ... and if it is, please let my reviews speak for themselves.

Getting back on topic:

I don't think that anyone here disagrees that the potential for issues getting lost in the queue is a real problem ... I've personally pulled applications dating back to last October. But I consider this an issue which already exists with the current process, and don't agree that the addition of a 'pre-screening' checklist for less experienced reviewers would compound it in any way.

Unprofessional?

Posted by ccardea

OK, you want a more professional evaluation of your proposal? I was hoping you'd get this without me having to spell it out for you. I don't like the long-windedness of the proposal. I don't want to have to wade through all that stuff, with applicant tasks and reviewer tasks, especially when the whole process can be summarized in three sentences. You're doing the same thing that we knock duplicate modules for: going full bore without looking to see what's already been done. And you're pushing your own agenda when you should be getting on board with the existing leadership. I think it's unprofessional to be doing this with only two weeks' experience. You should work with the process a little more before trying to change it.

I'd argue that having a 'fresh perspective' is a good thing

Posted by jthorson

I'd argue that having a 'fresh perspective' from someone actively involved in the process can only be a good thing ... in my experience, those who have been involved with a process the longest are often the ones most blinded to its flaws. Perhaps taking a module through the process yourself would change your viewpoint as well.

In any case, this is where a more civil approach might have allowed me a chance to clarify ... Regarding length, I'm only proposing the 'blockquote' text as the actual checklist. The rest was added in trying to envision a process involving the 'issue queue' (which you seem to be a strong proponent of), and only outlines the steps which would be taken in the existing process anyway. As a proposed checklist, no one is making you use it ... it's positioned as a tool for someone coming into the group with absolutely no background or understanding of the process, allowing them to start contributing and providing value to the process on day one - without having to spend time combing through documentation and trying to figure out what to do.

Regarding your other points ... you seem to think my participation in this group is part of some secret hidden agenda. I'm not sure what gave you that idea, but feel free to stop by the IRC channel and elaborate further. You only assume that being new to this group means that I haven't done my research and haven't looked to see what's already been done ... I assure you that this isn't the case. The whole purpose of this group is to 'discuss, document, and rally around the Full Project code review process', and proposing modifications to the process is a completely valid and essential component of this mission. If the 'existing leadership' has a problem with my enthusiasm and desire to contribute (i.e. 'going full bore'), then please let them speak for themselves. But it's hypocritical to suggest I need more work with the process before suggesting modifications, when you seem fine taking on the role of 'traffic cop' despite your equivalent experience.

Now can I respectfully request that you please try to keep future comments relevant to the post, and of a more 'constructive' and less 'emotional' nature? Responses and personal attacks like the one above serve only as distractions to the actual work some of us are trying to accomplish here, and provide no value to the group or discussion at hand.

And on a slightly related topic, in the future please click 'reply' directly on the comment that you're responding to ... bouncing your responses down to the bottom of the list makes it extremely difficult to follow the conversation thread, and only compounds the pollution problem these discussions introduce.

It figures

Posted by ccardea

It figures that you would miss the point and continue to push your agenda, and project hostility and emotion onto me because I disagree with you and your approach. I'm really not here to engage in this type of personal argument, especially when it's clear that I'd only be wasting my time.

Agenda

Posted by jthorson

Please contact me via my contact form and explain this 'agenda' I am pushing ...

Slimming screening

Posted by sreynen

I think we could slim down screening a bit. As a first step in that direction, I removed 2 of the 3 references to Doxygen documentation. I think (hope) we can all agree we only need to check for that once.

I'd also suggest moving "Runs through ‘Coder’ Module cleanly" to a later step, as it makes more sense to do that along with the other steps that generally involve installing the project in a local Drupal install. Everything else in screening can be done without downloading the code at all, which makes it much more approachable as a quick task anyone can do with nothing more than a browser required. I think keeping screening a quick, approachable task is pretty important to encouraging more people to do it.

Coder step

Posted by jthorson

I struggled a lot with the Coder step ... on one hand, it's something that pretty much anyone with a server could do, even without advanced knowledge of module development, which is why I consider it a pre-"technical review" step. On the other hand, it takes a little more effort (actually installing the module) to validate, and as it isn't a 'quick pass' item, it probably doesn't belong in 'screening' either.

Fundamentally, it would be nice to have this done in advance of a technical review ... I can envision a future where an individual could just run through the queue, installing each module and running it through Coder, and posting whether or not it passes cleanly ... something that the module author themselves probably should have done (and therefore part of the 'screening' process, in the sense that it's validating the application). That's why, in my first cut at this checklist, I had separated out a 'pre-screening' step (containing the 'quick-hit' items) and a more in-depth 'screening' step (which included items like the 'Coder' review).

In the end, of course, this all becomes a bit of an 'academic' discussion ... as the 'correct' fix would be to automate the Coder module check, so that no one (screener or reviewer) need worry about it. :)