Project ratings and reviews for redesign


One of the major new features proposed in the redesign site prototype for helping end users find the right modules is the introduction of project ratings and reviews. This post is a proposal for how we're planning to implement these features as part of the redesign. If there's sufficient traction for this proposal, we could roll it out independently of the redesign theme launch itself. So, if we can converge on a plan, we could see this live in the next few weeks...

Asking the right questions for ratings

Unfortunately, the redesign just proposes a simple "community rating" concept, using a single 5-star voting mechanism. However, for the information to be meaningful, you have to ask the right question that you want people to vote on. The point is not a popularity contest; it's to help give users a sense of what to expect if they start using a given module or theme on their site. We propose asking 4 key questions that let users rate subjective factors about any given project, beyond the objective factors that can be measured via the automated metrics we'll also be adding as part of the redesign (more on the automated metrics in a future post, stay tuned). I also looked at a comparable site that uses a multi-dimensional rating system; although a few of its questions are good, I don't think the others are that helpful. Here are the things we're planning to ask people to vote on if they're going to rate a given project:

Works as intended
Does the project actually do what it claims it does? Does it work well?
Well maintained
How responsible is the maintainer? Are there new releases at a good frequency? Are they careful about testing before releasing?
Ease of use
How easy is it to use this project? Is the UI self-explanatory?
Well documented
Is there good documentation? Is it clear? Is it sufficient?

Mostly, we'd just show these values independently, and let users decide for themselves which ratings are the most important when trying to decide if they want to use a given project. However, there are some places in the redesign that rely on a single "Top rated" notion, so we're going to have to provide an average rating across all questions.
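As a sketch of how that single average could be derived when some questions go unrated (the field names here are hypothetical, not from the proposal):

```python
def overall_rating(ratings):
    """Average the per-question ratings for one project, ignoring
    questions nobody has rated yet (represented as None)."""
    cast = [r for r in ratings.values() if r is not None]
    return round(sum(cast) / len(cast), 1) if cast else None

# Example: a project rated on three of the four questions.
print(overall_rating({
    "works_as_intended": 4.5,
    "well_maintained": 4.0,
    "ease_of_use": 3.5,
    "well_documented": None,  # no documentation votes yet
}))
```

Skipping unrated questions (rather than counting them as zero) keeps a project from being penalized just because one dimension has no votes yet.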


In addition to numeric ratings on the above questions, we want to give users a chance to provide a free-form review of each project. Although there's a chance some users will use the reviews as another channel for support and bug reporting, we believe that with sufficient warnings on the form for creating a review, and a general culture of collective moderation, this won't become a drastic problem.

The redesign also provides for a "was this review helpful?" feature, so we can use something like vote up/down to bury the useless reviews and promote the more useful ones.
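A minimal sketch of how the up/down votes could bury and promote reviews (the threshold, field names, and net-score formula are assumptions for illustration, not part of the proposal):

```python
def sort_reviews_by_helpfulness(reviews, bury_below=0):
    """Order reviews by net helpfulness (up minus down votes),
    hiding any review whose net score falls below the threshold."""
    visible = [r for r in reviews if r["up"] - r["down"] >= bury_below]
    return sorted(visible, key=lambda r: r["up"] - r["down"], reverse=True)

reviews = [
    {"id": 1, "up": 10, "down": 2},  # net +8
    {"id": 2, "up": 1,  "down": 5},  # net -4, buried
    {"id": 3, "up": 4,  "down": 1},  # net +3
]
print([r["id"] for r in sort_reviews_by_helpfulness(reviews)])  # [1, 3]
```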

Implementation details

I think the simplest and best approach for getting all this working would be to introduce a new "project review" node type. The ratings themselves could just be numeric CCK fields on these nodes. I think it makes sense that you have to create a review if you want to rate something, since it's good to be able to provide a rationale for the way you voted. None of the fields would be required, so you could write a review without casting a rating on any of the questions, or rate only some of the questions.

So, this project_review node type would have the following fields:

text area labeled "Review"
node reference to point to the project the review is about -- this would be pre-populated based on a final argument to node/add/project-review, e.g. node/add/project-review/[project-short-name] or something
works as intended
CCK numeric field with 0-5, optional, defaults to null (not specified)
well maintained
CCK numeric field with 0-5, optional, defaults to null (not specified)
ease of use
CCK numeric field with 0-5, optional, defaults to null (not specified)
well documented
CCK numeric field with 0-5, optional, defaults to null (not specified)
core version
use the same taxonomy that project_release nodes use for the core version (to target the review to at least a version of core)
project version (maybe)
optional node reference to point to a specific release node of the project -- not sure if we need this level of detail. yes, ratings + reviews could be pretty different across major versions (e.g. the 6.x-1.* branch sucks, but 6.x-2.* is much better). But, I don't think we want to require pointing to a specific version all the time. This aspect still needs a bit more thought.

The idea would be to add a project_review module as part of Project itself. It would just expose this CCK node type and the required fields programmatically, so that a site would get these defaults but would be free to modify the node type as they see fit. project_review would also be responsible for tallying the votes for each project (probably in its own table of totals), and then exposing these totals to both views and solr so that you could do interesting sorts and facets for the project browsing pages.
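A rough sketch of the kind of tallying project_review would do, written in Python for illustration rather than the PHP/SQL it would actually use on d.o (the data shapes and names are hypothetical):

```python
from collections import defaultdict

def tally_ratings(review_nodes):
    """Aggregate per-question rating averages for each project, the
    kind of summary project_review could keep in its own totals table.
    Unrated questions (None) simply don't count toward the average."""
    # project -> question -> [running sum, vote count]
    sums = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for node in review_nodes:
        for question, value in node["ratings"].items():
            if value is not None:
                acc = sums[node["project"]][question]
                acc[0] += value
                acc[1] += 1
    return {
        project: {q: s / c for q, (s, c) in per_q.items()}
        for project, per_q in sums.items()
    }

reviews = [
    {"project": "views", "ratings": {"works_as_intended": 5, "ease_of_use": 3}},
    {"project": "views", "ratings": {"works_as_intended": 4, "ease_of_use": None}},
]
print(tally_ratings(reviews))
# {'views': {'works_as_intended': 4.5, 'ease_of_use': 3.0}}
```

On d.o this would run incrementally as reviews are saved, with the totals table then pushed into the solr document for each project.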

We could also enable comments on the review nodes themselves, to give people a chance to reply to the reviews in some way (e.g. someone writes a scathing review but didn't RTFM or something). This isn't central to the whole thing, but it seems like it might be useful to consider it as an option. Certainly easy to disable later if we don't like it.

Not sure if we need it, but project_review.module could also handle some trivial validation, like ensuring the same user can't submit multiple reviews for the same version of the same project. This could be added later if it turns out to be a problem.

It'd probably also be worth exposing a default view that shows up as a tab on user/N/reviews that shows all the review nodes written by user N. A link to this view could be automagically appended to all reviews, so if you find a helpful review from someone, you can easily find any other reviews they have written.

Project UI

For now, the main changes to the project node UI for all of this would be:

  1. View of project_review nodes that takes the project ID as an argument, e.g. at node/N/reviews, perhaps just as a tab on the project nodes, or maybe its own page like the node/N/releases view.
  2. Block on the project node itself with the summary of ratings for each question and the total number of votes/reviews, with a link to a page for all reviews + ratings, and probably a link for "Rate this [project-type]".
  3. At the bottom of the project nodes, embed a view of the "top" review nodes -- i.e. the N highest-voted (most helpful) reviews, and a "more reviews" link. There'd also need to be a "write a review" link that takes you to node/add/project-review/[project-short-name].

Project browsing UI

The main download page and other project browsing pages in the redesign propose at least adding a sort for "Top rated" and for "Most reviewed". So, we'll just add those values to the solr document for project nodes, and then expose those as available sorts.

If we really wanted to get fancy, we could expose each rating total separately, too, so that you could order by the highest rated projects under "well maintained" separately from the best documented, etc. We'll probably put all of those in the solr doc, although we might not expose all of them as available sorts and filters by default. Perhaps there will be an advanced search project finder page that lets you get crazy with all these options.


What about Voting API?

From my perspective, it doesn't look like VotingAPI actually helps us at all for the main project ratings:

  1. You want the voting UI while creating/editing a review node, not while viewing the project nodes themselves
  2. We already have a good storage mechanism for the votes -- CCK fields on the review nodes
  3. We're going to want some specialized logic for creating the weighted ranking values

However, I'm not super familiar with Voting API, and I'm open to the possibility I'm totally wrong about this, and Voting API would save the day in various ways. So, I'm hoping someone more familiar with Voting API can comment here and either confirm that it's not really a good match for what we're proposing, or explain how it would fit in and what problems it would solve.

Certainly Voting API + vote_up_down would make perfect sense for rating the reviews themselves ("was this review helpful?"). I'm just talking about if Voting API makes any sense for the main project ratings themselves, the questions I'm proposing we record via numeric CCK fields on the project_review nodes.

That's the proposal. Please comment with any thoughts, concerns, corrections, or suggestions for improvement. After maybe a couple of weeks of gathering feedback, we'll move into implementation and hopefully roll this out on d.o in the near future. None of this is particularly rocket science; it should all be fairly straightforward to implement.



Net Promoter Score (NPS) & Microformats

rport's picture

While this subject is still in its early stages, you might also like to consider the following question...

Ultimate Question
Would you recommend this project to another Drupal user or developer?

The ultimate question is a numeric rating used to measure loyalty. It is well documented in the book The Ultimate Question by Fred Reichheld, and elsewhere on the web...

Normally the ultimate question is used by organizations to monitor loyalty, using a simple rating system to identify customers as either promoters (P) or detractors (D). This enables organizations to measure their performance through the customers' eyes.

There is sufficient evidence that this measure, called the Net Promoter Score® (NPS), can be used in other settings, such as the proposed Drupal project rating system.

NPS is calculated using a very simple formula:

NPS = P - D

where P and D are the percentages of promoters and detractors, respectively.

How to Calculate the Net Promoter Score

Respondents are asked to rate a module on a 0-to-10 numeric scale, and responses are categorized as follows:

  • Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, fueling growth.
  • Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
  • Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.

To calculate a module's NPS, take the percentage of respondents who are Promoters and subtract the percentage who are Detractors.
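The calculation above can be sketched directly (a straightforward transcription of the NPS formula, not tied to any Drupal code):

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 responses: percentage of promoters
    (scores 9-10) minus percentage of detractors (scores 0-6)."""
    if not scores:
        return None
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters, 3 passives, 3 detractors out of 10 responses:
print(net_promoter_score([10, 9, 9, 10, 7, 8, 8, 3, 5, 6]))  # 10.0
```

Note that passives (7-8) dilute the score without counting for either side, which is why NPS can range from -100 to +100.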

Adding the dimension of time...

By adding a dimension of time (that is, tracking when a module is rated), we could also track trends in a module's NPS as the developer or the community responds to user requests for features, bug fixes, documentation, patches, etc.





While on the subject of ratings, I suggest that it would also be great if this could support microformats such as those outlined by Yahoo and Google:

  1. Yahoo! Search Monkey
  2. Google Rich Snippets

Ultimate question or not

dww's picture

Thanks for your feedback. The thing is, just because someone would recommend a project to someone else doesn't mean it's the right module for the job. ;) That's why I'm wary of an "ultimate score" that's basically a measure of how much you like a module or theme. The perfect module for one site might be a total disaster for another site. So, I'd rather steer clear of this approach, and instead focus on rating the subjective factors that can't be automated (unlike the statistics about the issue queue, commits, etc.), and then let people decide based on all the available data. Ultimately, the reviews and case studies that point to the modules are going to be the most helpful for deciding if a module is the right fit for what you're trying to do. I don't really see this as a mechanism for measuring brand/customer loyalty, and I don't believe that's the right model for the problem we're trying to solve here. But I'm certainly open to hearing other perspectives on this.

That said, the links on microformats and "rich snippets" are very helpful. I was assuming we'd move that way once d.o is running D7 and we've got all the power of RDFa at our fingertips. But I suppose we could provide some of that markup now, directly in the theme function where we render the HTML for the rating block on project nodes. Definitely worth considering, even in D6. However, it gets back to the problem of collapsing the whole thing into "one big rating", which I'm hesitant to do.

Good in theory, but I'm not

Ildar Samit's picture

Good in theory, but I'm not sure I'd agree with this solution to the problem of finding good modules. To me, rating has one purpose: to make maintainers pay attention to the needs of the users. Multiple ratings are a pretty solution, but not a practical one.

When I'm looking for a module, I'd like to find all of the ones that seem to do the job and try them all. I really think that's the only way. That and reviews/authors' comments on what the difference is between module X and module Y. The problem is going through the sea of modules (big problem at times!), but once you find any particular one, it's pretty obvious if it's been maintained, or if the UI is good (remember, there need to be screenshots and demos!), etc... So again, I don't think this is a solution to any problem I know of.

Terrific proposal

moshe weitzman's picture

I expected to find some objections here but didn't find any :). You've got all the basics covered, and I think you can just use your own judgement for the fine details. Thanks for moving us forward here.

I encourage you to get past your distaste for the Average Rating. It's OK. Inevitable, really. Folks who are discriminating can read the reviews and draw their own conclusions. That's what we have all been doing since Amazon launched product reviews in about 1995.

The 4 criteria you mention look good to me. One missing vector is "code quality" or "architecture". I know that not everyone cares about this, but that's true of Documentation, Ease of Use, and Well Maintained too.

I do think that VotingAPI and Fivestar are gonna save you some UI work and code to calculate averages. A chat with eaton would clear that up for sure. Regarding averages, I would build in some radioactive decay here so that newer reviews count for more than older ones. That acknowledges that modules grow over time, and poor reviews at the start shouldn't continue to haunt them.
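One way to sketch that decay idea (the half-life value is a made-up tuning knob, and this is Python for illustration, not anything VotingAPI provides out of the box):

```python
def decayed_average(ratings, half_life_days=180.0):
    """Weighted average where each rating's weight halves every
    half_life_days, so recent reviews dominate older ones.
    `ratings` is a list of (value, age_in_days) pairs."""
    if not ratings:
        return None
    weights = [0.5 ** (age / half_life_days) for _, age in ratings]
    total = sum(w * value for (value, _), w in zip(ratings, weights))
    return total / sum(weights)

# A fresh 5-star review outweighs a year-old 1-star review:
print(round(decayed_average([(5, 0), (1, 365)]), 2))
```

With a 180-day half-life, the year-old review carries about a quarter of the weight of a brand-new one, so the combined score lands much closer to 5 than to the plain average of 3.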

I'll conclude that my dream here is that I can sort projects by the average rating of my friends. In other words, keep the ones I trust and ditch the rest. I'm envisioning one way friendships for this, so I could just friend thoughtful reviewers without their mutual approval. This is all for a future iteration, of course.

kyle_mathews's picture

I've found the question "How would you feel if you could no longer use this module?" really helpful in evaluating people's feelings about my product(s). The available answers are:

  • Very disappointed
  • Somewhat disappointed
  • Not disappointed (it really isn’t that useful)
  • N/A - I no longer use this module

This question is heavily used in the product development / startup land. The rule of thumb amongst practitioners is that you need to have at least 40% of respondents answer "very disappointed" before your product / service has achieved product/market fit.

This question would be most helpful for distinguishing between the must-have modules and the nice-to-have modules. A module can rate really high for ease of use, documentation, code quality, etc., but still not be that interesting or useful generally speaking. We want alpha-quality modules that meet a really important use case to be rated higher than a really mature module that recent Drupal development has passed by. A good example is the Event module: really easy to use, well maintained, etc., but I wouldn't be at all disappointed if I couldn't use it, because the cck/date/calendar combo is better all around for various reasons.

For more on the questions read:

Kyle Mathews

No such thing

NancyDru's picture

Kyle, there is no such thing as a "must have" outside of core. I have built several sites without CCK and Views. Sites that don't have user-submitted content don't get PathAuto. Generally, the biggest driver for Tokens on my sites is PathAuto. What makes a module "must have" is the requirements, not its popularity.


bekasu's picture


I'd like a way to get statistics (usage, downloads, size, ratings, etc) from the modules data tables.

While the average person would not need these datapoints, add1sun and I have had discussions about the lack of hard data available to help guide drupal support/documentation.

Pleading on the webmaster forum meets with limited success, generally due to other competing priorities.

I volunteer to write the extraction queries if we can include some way to have them executed when needed.


See also

about votingapi

marvil07's picture

Disclaimer: as one of the maintainers of vote up/down, my opinion may be biased toward votingapi ;-)

I think a CCK number field can handle this fine for the project review node type, but I want to lay out how votingapi can help compared to plain CCK numbers.

IMHO the point where votingapi helps is automating all the calculations we need to do. By default, votingapi provides the following aggregation functions (available in places like Views fields):

  • count (Number of votes)
  • average (Average vote)
  • sum (Total score)

votingapi uses the {votingapi_cache} table to store voting results when needed.

On the other hand, votingapi lets you extend the list of calculations (hook_votingapi_results_alter()) and present them in the Views field configuration for vote results (hook_votingapi_metadata_alter()), so providing our own function for averaging these kinds of votes on the project review type would be as easy as implementing two hooks.

Fivestar already provides a CCK field type, IIRC, so it would be as easy as creating fields to provide the voting widgets.

BTW, vote up/down does not yet provide a CCK field but, since 6.x-2.x, it lets other modules extend the voting. The module is now one generic piece (vud), three sub-modules (vud_node, vud_comment, and vud_term) that extend it, and a set of widgets (ctools plugins) that determine the behaviour. So I'm also planning to implement CCK integration and a generic +N/-N voting for 6.x-3.x (or 7.x-1.x, whichever comes first).

What about an option for maintainer response?

rmiddle's picture

As we know, reviews sometimes turn into a place where someone is unhappy about their patch not getting accepted. It would be nice if there was a place where a maintainer has the option to reply.


Already in the proposal...

dww's picture

We could also enable comments on the review nodes themselves, to give people a chance to reply to the reviews in some way (e.g. someone writes a scathing review but didn't RTFM or something). This isn't central to the whole thing, but it seems like it might be useful to consider it as an option. Certainly easy to disable later if we don't like it.

I missed that but even if the

rmiddle's picture

I missed that, but even if the maintainer can comment, is there going to be a way to easily see that it is a maintainer reply? Then again, that would be very hard to do right.


We probably won't get it

drumm's picture

We probably won't get it exactly right in the first pass. When writing code, you shouldn't do premature optimization; when adding community features, we shouldn't prematurely plan for hypothetical situations.

We should certainly consider these things, but the goal is to get something launched, see how it works, and improve from there.