- Initial Git access agreement and sandbox creation remain as they are today.
- Develop automated testing covering everything from Coder reviews to XSS screening to repository scans that verify the project contains Drupal-like code.
- Run ALL modules through automated testing before allowing promotion to full projects.
- If a user does not have 'create full project' permissions, have them submit a 'Project Submission' to an issue queue.
- The issue queue is intended as a 'sanity check' only ... not a full technical review. Most issues should be caught by the automated testing. The queue's purpose is to:
  - keep tabs on how well the automated tests are working,
  - provide an alternative path for modules that "can't" pass the automated tests (with valid reasons), and
  - provide corrective direction and/or suggestions for items which can't be caught via automation.
- Any issue in this queue which sits for two weeks in Active or CNR state, without new comments, automatically gets bumped to RTBC.
- Applicants who clear the queue get access to a 'promote' link for that project.
- 'Create full project' rights allow an individual to promote any project which passes the automated testing directly, without applying via the queue (though they can still use the queue, if they so choose).
- Tie granting of 'create full project' rights to a competency demonstration (via a challenge quiz).
Revamping the Project Application Process
As many of you are aware, a number of Project Application discussions have been taking place recently in the Drupal community, building on momentum kicked off at the related DrupalCon Core Conversation last month. The general consensus in these discussions is that the current Project Application process is not meeting the needs of potential contributors or the Drupal community, and that immediate action is required to rectify this situation.
While these discussions have produced some excellent suggestions for improvement, many of the conversations have taken place in person or on IRC. Those channels are great for individual debate and collaboration, but they do not lend themselves to participation from the greater community at large ... hence the need for a central repository. This page has been created to focus the conversation on a specific proposal for analysis and critique, and to serve as a starting point for tweaks and modifications ... with the hope that we can speed up the process, develop a consensus, document an actionable plan forward, and start executing on that plan without further delay.
So, as a starting point for further discussion, I'd like to offer the following proposal for your consideration.
I'll provide a flowchart of the full process proposal at the bottom of this article ... but for now, I'll break it into individual concepts in the interest of keeping things easily digestible.
Concept 1: Review Automation
In all of my discussions, this has been the one piece that everyone agrees on ... we need to automate as many of the review process checks as possible. The logical place to enforce these checks is just before the project is promoted to 'full project' status. I see these automated checks including the following:
1. Automated Coder Reviews (Must pass cleanly ... but at what Coder severity level? http://drupal.org/node/1299710)
2. Automated XSS Security Scans (http://drupal.org/node/786856)
3. Namespacing Review (ensure functions are namespaced properly, verify project shortname uniqueness)
4. Licensing Review (scan for license.txt files, inclusion of external libraries ... how?)
5. Documentation Review (doc blocks, presence of README.txt, etc)
6. Repository Scan (repository contains code, key Drupal module files such as .info/.module present)
7. Demonstrated Basic Git Understanding (maintainer has successfully created a -1.x branch and a -1.0 tag)
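As a rough illustration of how some of the simpler checks above might be automated, here is a minimal sketch of the Repository Scan and basic-Git checks. The function names, file patterns, and branch/tag naming rules are my own illustrative assumptions, not an actual drupal.org implementation:

```python
import re

def check_repo_layout(files):
    """Repository Scan: the sandbox must contain key Drupal module files."""
    has_info = any(f.endswith(".info") for f in files)
    has_module = any(f.endswith(".module") for f in files)
    return has_info and has_module

def check_git_refs(branches, tags):
    """Basic Git check: look for a release branch (e.g. 7.x-1.x)
    and a release tag (e.g. 7.x-1.0); patterns are assumptions."""
    branch_ok = any(re.fullmatch(r"\d+\.x-\d+\.x", b) for b in branches)
    tag_ok = any(re.fullmatch(r"\d+\.x-\d+\.\d+", t) for t in tags)
    return branch_ok and tag_ok

def run_checks(files, branches, tags):
    """Run all checks; a project is promotable only if every check passes."""
    results = {
        "repository_scan": check_repo_layout(files),
        "git_understanding": check_git_refs(branches, tags),
    }
    results["promotable"] = all(results.values())
    return results
```

For example, a sandbox containing `mymodule.info` and `mymodule.module` with a `7.x-1.x` branch and a `7.x-1.0` tag would pass both checks, while a repository holding only a README on `master` would be blocked from promotion.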
As the above checks would help to improve the overall code quality of contrib projects, I see them being applied against ALL sandboxes before promotion, not just those belonging to new contributors. This brings up the question of non-typical projects with a valid reason for not passing one of the automated tests (or which trip false positives) ... which I will address in a bit.
For now, the 'typical' process for the majority of people (those who already have full project access) is outlined in the next image.
Concept 2: Users Without "Create Full Projects" Permissions
The above process is fine for existing contributors, but how do we handle the folks who do not have the "Create Full Projects" access? They do have the option of attempting the aforementioned challenge quiz, but making this mandatory would introduce a new (and significant!) barrier to entry for new contributors ... and we're trying to remove roadblocks, not put more up!
So instead, once a user's project is passing all of the automated tests, we have them create a ticket with their formal "Project Submission" notice in an issue queue. Here, human volunteers pick up the ticket and perform a sanity check on the following:
- Validate the automated test results look correct (ie. Yup! They all passed!)
- Ensure the sandbox project page has a reasonable description of the project (ie. Project Page contains more text than "this is my sandbox i'll add description later")
- Ensure the sandbox actually contains a project (ie. To discourage 'namespace squatting' and 'Just_for_Spam' projects)
- Ensure the applicant's chosen namespace makes sense for the project (eg. "mydrupalmodule" might pass a namespace uniqueness test, but let's keep it out of contrib)
- Ensure the applicant's chosen namespace doesn't conflict with existing projects (ie. avoid ending up with three different projects using the entityapi, entity_api, and entities_api namespaces)
- Ensure this isn't simply image slideshow module #374. (Allow 'duplicate' modules to encourage innovation ... these are intended as 'sanity checks', not 'justification points'!)
- Ensure the code doesn't include any external libraries.
- Ensure the git code repository isn't a total disaster.
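To make the namespace-conflict check above concrete, here's a minimal sketch of how a reviewer aid (or, eventually, an automated test) might flag near-duplicate shortnames. The normalization rules and the 0.8 similarity threshold are my own illustrative assumptions:

```python
import difflib

def normalize(shortname):
    # Illustrative assumption: compare shortnames with separators stripped,
    # so that entityapi and entity_api collapse to the same string.
    return shortname.lower().replace("_", "").replace("-", "")

def conflicting(proposed, existing, threshold=0.8):
    """Return existing project shortnames that look too close to the proposal."""
    p = normalize(proposed)
    return [
        name for name in existing
        if difflib.SequenceMatcher(None, p, normalize(name)).ratio() >= threshold
    ]
```

With this sketch, `conflicting("entities_api", ["entityapi", "views"])` flags `entityapi`, matching the entityapi/entity_api/entities_api example above, while an unrelated name like `views` is left alone.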
Concept 3: Why Keep the Queue?
- First of all, the queue name change is to reinforce the fact that this is no longer a 'project review' queue. The checks listed above are intended as quick sanity checks before rubber-stamping the application with an RTBC status.
- While it doesn't really lend itself as a full 'mentoring' opportunity, the human touch point does allow an opportunity to provide direction and correction for things the applicant may not fully understand, but which cannot be caught via automated tests. (eg. "No, you don't need to create a new branch/tag after every commit!")
- Automated processes are great ... until you don't fit the assumptions they are built around, at which point they become a huge PITA. If a particular module gets blocked by the automated tests due to a testing bug or false positive, or contains code which falls outside the assumptions the tests are based on, then the maintainer can use the issue queue to explain the situation and request an exception ... giving folks with 'full project access' an alternative when the automated testing says 'omgwtf? kthxbai!'.
So once the automated tests are in place, we mark all existing applications as 'postponed', with instructions to the applicants to re-apply once their module passes the automated testing requirements. (In fact, I'd suggest creating a new queue, and closing off the existing CVS and Project App queues entirely after directing existing applicants to the new process). And then we auto-prune to keep the queue from growing out of control again.
Concept 4: Application Auto-pruning
As long as the number of open issues is kept at a manageable level, and the action required to address an issue is relatively minor, then issue queues are actually very efficient. With the existing Project Application queue, however, making progress on an issue often required an hour or so on the part of the reviewer, and sometimes days of effort on the part of the applicant ... and most tickets needed to be revisited numerous times before being closed off. As a result, the number of open issues in the queue grew to the point where the reviewers could no longer keep up, and the sheer volume began to serve as a demotivator for people volunteering their time to perform reviews.
To prevent this scenario from recurring, there is a need to ensure that i) applications can be vetted and approved with a minimal time commitment, and ii) a process is put in place to prevent unattended applications from slipping through the cracks.
Moving from a 'review' process to a 'sanity check' helps to satisfy the first condition. To satisfy the second, any application which goes idle (no comments or updates) for two weeks, and has not been marked as 'needs work', 'postponed', or 'closed (won't fix)', is automatically bumped to RTBC.
The theory is that auto-pruning the queue in this manner will keep the queue size down to a manageable level, while the two-week delay provides a reasonable chance for reviewers to revisit applications flagged as having potential issues before they are pushed through (without permanently blocking an application when that revisit doesn't happen).
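The two-week rule above could run as a simple scheduled job. Here is a minimal sketch; the status strings, field names, and data layout are illustrative assumptions rather than actual drupal.org queue internals:

```python
from datetime import datetime, timedelta

IDLE_LIMIT = timedelta(weeks=2)
# Statuses that should NOT be auto-bumped (assumed labels).
EXEMPT_STATUSES = {"needs work", "postponed", "closed (won't fix)"}

def prune(issues, now):
    """Bump applications idle for two weeks to RTBC; return bumped issue ids."""
    bumped = []
    for issue in issues:
        if issue["status"] in EXEMPT_STATUSES:
            continue  # a reviewer has flagged this one; leave it alone
        if now - issue["last_activity"] >= IDLE_LIMIT:
            issue["status"] = "reviewed & tested by the community (RTBC)"
            bumped.append(issue["id"])
    return bumped
```

An application marked 'needs work' is never bumped, no matter how long it sits, while an 'active' application untouched for two weeks gets the auto-RTBC described above.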
Concept 5: Per-Project Promotion (without 'create full projects' permission)
The last piece of this puzzle is the decoupling of the 'Promote Project' link from the 'Create Full Projects' permission ... instead allowing the link to be enabled on a per-project basis. When someone (who: require 'full project' rights? Git Admin?) marks a Project Submission as 'fixed', this should trigger the appearance of the 'Promote Project' link on that project ... but the exact timing of when the project is promoted to official contrib status should still be left in the control of the applicant/maintainer.
Concept 6: (Optional discussion) Separation of 'code approval' from 'developer approval'
One of my long-standing criticisms of the existing Project Application process is the fact that approval was 'all-or-nothing': if your module didn't get approved, then you didn't get approved ... and, conversely, there was no way to approve a simple module without also giving the author free rein in contrib, even though the module may not contain enough code to demonstrate programming competence and knowledge of Drupal APIs/licensing/security requirements. One of the suggestions which came out of the post-DrupalCon discussions on this topic was the development of a 'quiz' to validate a user's knowledge of Drupal's APIs and common security issues. To me, this would serve as a great tool to validate the 'developer approval' side of this equation.
User passes 'Full Project Certification' quiz --> User is granted the 'Create Full Projects' permission
Putting it all together ...
Combining the above concepts/flowcharts gives us the following result ... a complex and convoluted flowchart, which should hopefully result in a much simpler process. Comments, opinions, and feedback encouraged!!!