UMN Testing - First thoughts

dcmistry's picture

As we get ready to plan for UMN testing, I am excited and throwing some irons in the fire. As we all know, there are plenty of things that need to be ironed out: What do we test? How many participants do we need? How much are we compensating them? How long should the sessions be? Whom should we test: new users, experts, or both? Oh well! For these and many more questions, we have the next meeting :) But just to get things started, I would like to share what we could possibly test. The intent of this post is to solicit ideas and suggestions and to start a conversation.

What to test? It is the million-dollar question, isn't it? With so many things to be tested under the Drupal umbrella, prioritizing is going to be a challenge. There are several approaches we could take: the site builder Jeremy and the tasks he does (at different stages of site building), the site contributor (who is mostly responsible for managing content), or someone who is both a site builder and a contributor. With some overlap, each of them has very different roles and tasks, and their views of the website world might be very different. After some initial thinking, maybe this study could focus on the first experience of Jeremy because:

  1. The first experience of using Drupal is not an easy one. If we make this process smoother, we are improving the experience of the novice user and encouraging him/her to do more. (As Drupal becomes more widespread, we need to cater to users who are not the "techy-techy" sort. By taking a universal design approach, we cater to other user sets too.)
  2. The other approaches mentioned above (contributor vs. builder) are crucial and need to be tested as well. I am conducting a lot of usability studies around those. It would be helpful for us to test features which I have not tested or planned yet. This way, we will be making the most of our time and will be able to gather more data. This would certainly help us inch towards a super D8 UX.

We could frame the study around the scenario that Jeremy has just finished installing Drupal and is excited about creating his site. Although a big enthusiast, he is green to Drupal. Based on this, I have created a few tasks that a user would normally do during his/her first interaction.

Task 1: Now that you have installed Drupal, you would like to take some time and explore the system. Take a few moments and let me know your first impressions of the system. (The rationale is that users would most likely do some exploration of the system before actually using it. This way we are making them familiar with the system.)
Task 2: You would like to change the visual look of your website. What would you do? (I believe first-time users are intrigued by the appearance of their site. It is very important to them. They might not finalize the look of the website, but would certainly be interested in knowing what options are available to them. This test is focused on "Appearance".)
Task 3 A: You have heard about modules on Drupal and think that this could be a good resource for your website. How would you explore this feature?
Task 3 B: Based on what you saw, you would like to install a module which allows you to . How would you install this module? (The goal of the task is to understand how they find and install modules. At the end of the study, if time permits, we can ask the participant to uninstall a module.)
Task 4: You would like to change your site name and also add a slogan (or a logo) to the website. How would you do that?
Task 5: You would also like to create a section "About me" on the left-hand side of the website which appears on all the pages. How would you do that? (This task will cover creating content, creating a block, and positioning a block.)

Throughout the test, we could also observe how they use the toolbar and probe what they think about the terminology. Also note that we will have pre-session and post-session questions, along with probing questions for each task.

What do you think?
Looking forward to your comments!

Note: The tasks and scenarios are just drafts!

Comments

What about existing End Users?

jasonsamuels's picture

Just a thought to throw out there about the audience to target: instead of (or in addition to) people entirely new to Drupal, what about testing usability on people who are currently site contributors on existing Drupal (6 or below) sites? The idea being that people who are already day-to-day Drupal end users may have insight into the usability changes of the D7 interface.

One of the reasons I bring this up is because the April Twin Cities Drupal User Group meeting is going to be focused on this audience. See http://groups.drupal.org/node/137304

If there is interest in testing this group of people, the April meeting could be a great recruiting opportunity.

@dcmistry Awesome, great to

cfennell's picture

@dcmistry Awesome, great to see you are picking up this thread. The process-oriented questions you raise here (recruiting logistics, etc.) will indeed be addressed at our kickoff teleconference meeting with the U of M lab staff. As for the scenario, I think your "enthusiast but green to Drupal" segment is right on target with what most people have been thinking. Bojhan, I think, has the scenarios from the last round of tests at Baltimore. I believe we were planning on using/adapting those. I'll see if I can dig those up and post them back here. Bojhan, feel free to jump in here if you like.

@jasonsamuels Interesting and valuable project idea, but I think we are probably going to focus on "new" users in this round of testing, for the reasons @dcmistry cites above and because it is the baseline segment against which we tested Drupal 6 in the last round of testing, so we'll have a better sense of our progress between versions (not that this is in any way scientific).

To Jason and Chad

dcmistry's picture

@cfennell Glad to know that we are all on the same page. Yeah, it would be good to go through the scenarios from the Baltimore study.

@jasonsamuels Thank you for sharing that. I will certainly keep this in mind for other future studies we might have. Such user groups can be a great source of feedback.

Dharmesh Mistry
UX Researcher | Acquia,Inc.

Thanks for kickstarting this

Bojhan's picture

Thanks for kickstarting this discussion. There are a couple of decisions we need to make; ever since we started testing, we have changed direction to encompass different kinds of audiences.
The goal of this test is to validate a lot of the changes we have made, and to discover new areas of attention.

Audience

I believe the audience of this test should be mixed: about 4 beginners and 4 intermediate users. All these users are site builders (Jeremys) who have experience using CMSes, and only the intermediate users have experience with Drupal. I think we should avoid testing true content editors, since their needs are limited compared to what we need to cover in this test. Additionally, it's the same audience we used in a previous test, which allows us to make a rough comparison.

I do not think that testing existing users and their transition to Drupal 7 is a good thing to test. We provide no accommodation for them in this UX transition from D6 to D7. There is no need to test to see this pain if we know that in Drupal 8 we are not going to care about this either. It's a sad reality, but a very real one that I don't think will change.

Which ground to cover
We will be testing typical site-building tasks, and expanding this a little to cover more ground: configuring your own admin (dashboard, shortcuts) and building a content type (Field UI) with intermediate users.
@dcmistry The important thing here is that we discuss the details; we should indeed avoid pointing our headlights at the same thing. But I do need to clarify that this is a community test, where the goal is to communicate the whole of Drupal's problems and opportunities.

Tasks
Let's be clear on the goals first. I think the tasks outlined in earlier plans fit nicely with what we want to test here. The only thing we need to do is expand them to cover the new interface elements in Drupal 7.

@Bojhan "4 beginners and 4

cfennell's picture

@Bojhan "4 beginners and 4 intermediate users"

This will be a key agenda item, then, in our kickoff meeting. I suspect we may have to dig a bit to locate and recruit "4 intermediate users" at the University. That is, I'm not sure we can recruit external people if we have difficulty with our internal list of evaluators (maybe so, though); we'll need to work within the process established by the lab for recruiting, but I expect we could make this happen in any case.

@Bojhan, do you happen to have the "tasks outlined in earlier plans" handy? It'd be great to start a wiki in the testing group to keep track of that sort of thing over time.

Agree a transition from 6 to

Noyz's picture

Agree a transition from 6 to 7 is not the right goal here. I think we need to focus on the site builder. We can, and will, do a content contributor test at Acquia at a later date.

@bojhan...

4 beginners and 4 intermediate users doesn't actually provide accurate data. All usability testing methodologies require a minimum of 5 users, but most suggest 6 to 7, which I agree with and put into practice at Acquia. That said, I think you should focus on new users only. If new users get it, experienced users will too. The only reason I could see for testing intermediate users would be if you were to ask them to perform intermediate tasks like creating views, adjusting image styles, etc.

Additionally, I'd challenge you on focusing on "earlier plans." As I said in Chicago, by the time this test comes, we'll have already covered features like menus, content types, etc. While I really think we should nail down the plan in our kickoff (as @cfennell keeps calling out), I'm mostly in favor of the test plan Dharmesh outlines because:

  1. Blocks is a fundamental feature, one that we'll be trying to fix here at Acquia very soon, and usage data would be really helpful. Additionally, I think it's mostly untested.
  2. Appearance is different in Drupal Gardens, and therefore the D7 version will not be tested by Acquia. Plus, it's new.
  3. Modules, also a fundamental feature, have not been tested either. Upgrading an installed module is new to D7. Additionally, I think Acquia will end up replacing that page in DG, and therefore it will go untested by us.

Site name and slogan I feel less strongly about. We already changed them in DG because we know people can't find them. It was a huge problem for users. It was such a big issue, maybe we want the community to experience that issue personally.

Let's get our numbers straight

cliff's picture

There is a lot of misunderstanding about the number of participants required to get accurate usability data or, for that matter, valid and useful results. The simple answer is, "It depends."

If a problem we have failed to perceive is serious enough for us to detect it every time a participant encounters it and frequent enough that every participant will encounter it, then we will find it with the first test. Any less frequent than that, and we're playing a game of percentages. There simply is no one right answer.

Part of the equation depends on our perception: Are we sharp enough to recognize problems the participant is having, perhaps even when the participant himself doesn't realize that he's encountered a problem? To this extent, we can improve our detection rate by improving our skills.

The other part of the equation is simple frequency: Will the problem arise for every participant, no matter what she does? If not, how frequently will it arise? The less frequently it arises, the less likely we are to detect it. To this extent, we can improve our chances of detecting it by including more participants.

And the real truth of the matter is that we would need to run tests with scores, perhaps hundreds, of participants before we could make informed decisions about all but the simplest features of Drupal. Why is that? Because building a website with Drupal is a highly complex process.

With a simple process, such as registering on a website, we should be able to detect meaningful problems well before the numbers reach statistical significance. For example, we should be able to design a highly usable registration process without running 30 iterations of the same test.

But as the process we observe grows more complex, we will need progressively more observations to recognize important differences and trends. When participants have many options and must process a fair amount of information before deciding what they need to do, statistical significance can be a useful guide. It increases our chances of paying attention to real results and ignoring background noise. But it still is no guarantee of accurate data.
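To make the frequency argument concrete, here is a small illustration (my sketch, not from the thread) of the standard problem-discovery model, in which a problem that affects a fraction p of users is seen at least once among n participants with probability 1 - (1 - p)^n:

```python
def detection_probability(p: float, n: int) -> float:
    """Chance that a problem affecting a fraction p of users
    shows up at least once in a test with n participants."""
    return 1 - (1 - p) ** n

# A frequent problem (hitting 31% of users, the average assumed in
# Nielsen and Landauer's model) is very likely caught by 5 testers:
print(round(detection_probability(0.31, 5), 2))  # 0.84

# A rarer problem (10% of users) often slips through the same small test:
print(round(detection_probability(0.10, 5), 2))  # 0.41
```

The often-cited "five users is enough" figure rests on that 31% assumption; for rarer problems, as the comment above argues, far more participants are needed.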

Look at it another way: Run the test with 10 participants, and assume we discover that the location of a button stymies two of them. Do we ignore their problem, simply because it didn't affect the other eight participants? (Would that be "Designing for the 80 percent"?)

If we were to take that approach, we wouldn't be very bright. Instead, we should use what we know about usable design, human behavior, and our intended audience to make a judgment call.

And with the next 10 participants, we should test a modified system, and then ask ourselves whether the results suggest that we have, in fact, improved things enough to have the system we need.

blocks

catch's picture

I'm not convinced about testing blocks at all:

  • we already tested them several times during previous formal usability testing, and the results were not good - so it is definitely not 'mostly untested'.
  • there are plenty of known issues with the current UI from both testing and anecdotes (doesn't scale to large numbers of blocks, no preview etc.), I don't think we'll find very much new data compared to other areas of core.
  • we have not spent much effort trying to make incremental improvements to the blocks UI in Drupal 7, and I doubt that will happen in 8.x either - because everyone is hoping we can ditch it and start more or less from scratch at some point. So there will not be much change since previous test runs, nor immediately actionable results either.
  • there are numerous competitors to the blocks UI in contrib that are in widespread use, this is not the case for many other interfaces in core.

Appearance and modules I agree are good candidates though.

Hm, I can actually think of a

David_Rothstein's picture

Hm, I can actually think of a few incremental improvements to the blocks UI in Drupal 7 (moving the "highlighting block regions" feature to a separate demo page, adding the region dropdown to each individual block's configuration page, etc).

We probably don't want to focus on the blocks UI specifically, but any reasonable test is likely going to hit blocks at some point (e.g. if we ask them to add links to a menu and put the menu on their site), and that's probably a good thing. Even if people want to rip out the blocks UI and start over in Drupal 8, having more data on what does and doesn't work in the current UI would be useful to help design what the new UI should actually look like.

@Noyz Thanks for the

Bojhan's picture

@Noyz Thanks for the feedback. I would definitely favor expanding our participant count. I think this is an agenda point, since it largely depends on who can actually be recruited (as Chad noted, intermediate users might be tough).

I think it's important to recruit intermediate users, because we want to cover interfaces like Field UI, image styles, and other more complex site-building tools. We have not tested them before, as we focused primarily on more basic tools like blocks, menus, and taxonomy. Not to say those tools are good or anything like that - but it's unlikely we can get to things like Field UI and image styles fully with beginners.

However, given your comments, we can focus more on beginners if you are going to cover the same ground. I hope it's understandable, though, that unless you are able to share the testing you do in a way that's useful for the community (e.g. sharing the videos), we cannot steer this test plan based on it. I would love to see more thoughts on how you will do this.

If we can come to agreement on this audience question, it's probably more fruitful to talk about actual tasks.

@chad I have published both test plans; it's probably best to start wikis on them:
Beginner: http://docs.google.com/View?id=dg8ph5v8_16d6fns4c7
Intermediate : http://docs.google.com/View?id=dg8ph5v8_17gwzckxfj

We'll definitely be posting

Noyz's picture

We'll definitely be posting videos back to the community.

In Chicago, didn't we agree that Jeremy was the target?

BTW, I really like Leisa's suggestion. I've never seen that done in a lab on an entire site, though - anyone else seen that? We have done similar tests in Gardens using the themebuilder ("recreate this theme"). It works really well, but again, that's focused on a feature.

Great to hear :) We did agree

Bojhan's picture

Great to hear :)

We did agree that Jeremy was the target. Given that you will be sharing the videos (can you cover in more detail which interfaces/flows you will test?), I think it's okay to recruit beginners (who ideally fit the Jeremy persona).

I really like Leisa's idea too. In Baltimore, and in a few of the tests we did at camps, we got feedback from a lot of people that they had difficulty imagining what they were building, as we were guiding them through the tasks. So it might be a great idea to actually give them a tiny description, wireframe, and sitemap.

"Guiding them"?!?

cliff's picture

You were guiding them through the tasks?

Then you weren't running valid tests. No matter what changes those tests inspired, the results were largely invalidated by that procedural error.

What can be done to avoid making that mistake this time around? Anything?

It doesn't take an expensive lab to run meaningful and useful tests. It takes a valid procedure, a trained moderator, a representative group of participants, and skillful observers. With that, you can run meaningful and useful tests with a laptop, perhaps a webcam, and Skype. Add GoToMeeting, Camtasia, or Silverback, and you can record it, too.

A less task-based structure?

leisareichelt's picture

So, I had a thought about the structure of the test... I think there are pros and cons to it, but thought I'd suggest it.

What if, instead of defining a bunch of tasks, we just gave the participants a small (very small) project?
We make a teeny website in Drupal, maybe give them wireframes / sitemap, then just ask them to try to make the same site using Drupal.

If they totally fail, we step in and guide with tasks.
Could be a great way of uncovering mental models - which I suspect are actually pretty key to the problems people have getting started with Drupal.

Should also uncover more easily actionable issues as well.

leisa reichelt - disambiguity.com
@leisa

Absolutely the right approach!

cliff's picture

Leisa, this is a tremendous insight. A valid usability test needs to start with the information a person who doesn't know Drupal has — the content and wireframes of their own or a customer's website — and see how far they can get on that project with Drupal itself.

We should not prompt the participants under any conditions. If they haven't a clue as to what to do, we should have a contingency plan — for example, take them to a specific admin screen and give them a specific task to accomplish on that screen.

Valid usability testing is a lot like giving an eye exam. You put the participant in a number of situations and observe what they can do. You don't suggest that what seems to be an "E" is really a "C."

And I wouldn't worry about comparison to the prior test results. If the purpose of these tests is to "validate," we're wasting the participants' time for the sake of stroking our own egos.

The real purpose of usability testing is to find out whether Drupal works well for an intended audience. We know where we want to be, and the real issue is whether we have gotten there — not whether what we have today scores any better than what we had three years ago. Struggling in a swamp, we shouldn't be satisfied with finding a dry spot. Our goal is to get out of the swamp entirely.

An excellent tool

mpearrow's picture

Especially in the case of people who have never used Drupal before, it can be hard to derive a meaningful set of tasks. Part of the usability barrier to Drupal currently is that it has its own "way" that requires acclimation.

A set of complete novices will not have the chops to dive in and start adding views, etc., but it would be extremely valuable to catalog the actions and reactions of novices to Drupal when handed a broad task - such as happens to many newcomers to Drupal! "Hey, we need a new departmental web site, and we heard that the IT department likes Drupal, so go figure it out," etc.

This sort of exploratory testing - as opposed to classical usability testing with its tasks and metrics - can help flush out assumptions that novice users have about what a system should be and how it should behave, as well as assumptions that the Drupal community has about what is "obvious."

Since the current UMN testing is already knee-deep in planning, I'm not sure how hard it would be to retool to take such an approach, but I think doing this sort of work is generally a good idea, and I would be glad to set something up here at MIT, maybe with a non-student sample (e.g. staff/admins who have to learn Drupal for their professor's site / lab site / departmental site).

@Bojhan and @Chad Just a few

dcmistry's picture

@Bojhan and @Chad

Just a few other questions:

  1. Are we testing for all 3 days (May 17-19), or does that include the analysis time? (I ask because this affects how many people we can test.)
  2. Is the intent to test all the tasks mentioned in the beginner/intermediate documents, or are they a starting place? (Usability sessions are usually 60 minutes, and they work very well. Sessions longer than that could make the participant less engaged and less focused. 90/120-minute or longer sessions are not alien, but you have to be mindful of the cognitive load on the participant. The tasks mentioned in the documents seem impossible to complete in 60 minutes.)

Dharmesh Mistry
UX Researcher | Acquia,Inc.

@dcmistry w/r/t your first

cfennell's picture

@dcmistry w/r/t your first point, day one and two are testing days with very brief analysis following each evaluator (just identifying issues really), and day three is a half day of wrap-up analysis. We'll have a space for the group to convene after the lab staff kicks us out though, so we can continue to post issues and/or do analysis.

@dcmistry 17, 18th is

Bojhan's picture

@dcmistry

  1. The 17th and 18th are testing and the 19th is analysis. I expect we will do some analysis in between sessions, at least some "writing stuff down clearly". During the scheduled meeting we can perhaps discuss our analysis method, perhaps doing some affinity diagramming.

  2. Sure - a lot of the tasks beyond 5-6 are there in case participants get to that point. In the tests we did, only a few reached task 9 or 10. The tests lasted an hour, with a 10-minute pre-questionnaire and a 10-minute post-questionnaire.

@Leisa, Bojhan and Chad

dcmistry's picture

@Leisa: I like your idea, actually a lot. However, I have some concerns if this would be an appropriate path to tread for this test. I would be happy to share my concerns during the meeting.
@Bojhan and @Chad: Thanks that gave me some clarity.

Dharmesh Mistry
UX Researcher | Acquia,Inc.

Moving forward

dcmistry's picture

Just a brief summary from the meeting:
- To test 8 participants for 75 minutes each
- Participants will have no experience in Drupal
- Based on the current situation, we will be targeting undergrad and grad students in computer science/design at UMN.
- We are awaiting response from IRB for the videos

Now that we have ironed out a few basics, we can focus on the tasks. So, let's pour in thoughts. Some of us have suggested tasks and ways to test; I think it would be a good idea to take those as the starting block.

Dharmesh Mistry
UX Researcher | Acquia,Inc.

Tasks I would like us to

Bojhan's picture

Tasks

I would like us to specifically focus on site-building activities. Tasks that come to mind are:

Menu & blocks
- Building a menu structure
- Adding this menu to the site (blocks)

Taxonomy
- Creating a taxonomy, and using this with a content piece

Content types & fields
- Creating a new content type
- Adding an image field to this content type

Appearance
- Changing the theme

Configuration
- Changing the site name
- Allowing visitors to register without admin approval

Dashboard
- Adding a block

Shortcuts
- Removing shortcuts

As for the way we do these, I think wireframes could definitely work. I am not sure we are confident enough that this method works, but it seems like a good thing to pursue.

My thoughts

dcmistry's picture

Thanks, Bojhan! Looks like we are off to a good start.

My thoughts:

  • Content types and fields: This is a powerful feature of Drupal and would be great to test. In fact, I did some initial testing on content types in Drupal Gardens. However, my concerns with testing this here are:
    o Our target population for the test is users with no experience in Drupal. Creating content types is an advanced feature, typically for users who understand the basic workings of Drupal and are ready to take it to the next level. First-time users are on a different learning track.
    o We have 75 minutes allocated per session. Creating a custom content type is a time-intensive test, and squeezing it in with other tasks might yield less consistent, lower-quality data.
  • Dashboard and shortcuts: I like the idea of incorporating dashboard and shortcut tests. I think the underlying question is whether users see the value in these tools and would use them, rather than whether they are able to accomplish a task with them.

Thoughts?

@Bojhan: Could you say more about wireframes that you are referring to?

Dharmesh Mistry
UX Researcher | Acquia,Inc.

chrisco-on-twitter's picture

I am a Drupal newbie and have been researching and reporting some of my first impressions here. I apologize that my tone is somewhat "aggressive"; it is not my intent to do anything other than provide a true-blue, unedited, real-time account of impressions in case it might be useful. As I said in a comment, the more I learn about Drupal and meet and interact with the community, the more I like it:

"Wake up community - Wordpress.org should scare you!"
http://groups.drupal.org/node/136294#comment-455844
(I linked to my first of multiple comments. And, of course, there are many other more useful comments on that discussion.)

Test plan

dcmistry's picture

I started a draft of the test plan in the "Usability Testing Group". Please look at it and suggest changes or edit :)

Dharmesh Mistry
UX Researcher | Acquia,Inc.

Point of clarification

cfennell's picture

Remind me, were we actually planning on having our evaluators (try to) download and install a module, just having them talk through how they would do so, or giving them a choice? The task in question:

"Task 5: One of the members of the conference is insisting that you have a calendar or twitter feed (not decided yet) on the website. How would you go about doing that?
Focus: Install a module"

Maybe just getting to the FTP credential page to download the module in question is where the task ends? If we want them to do the actual download, we'll need to be sure to get FTP/credentials ready for users on our test server.

Thanks.

The download/upload happens

David_Rothstein's picture

The download/upload happens before you are ever asked to type in FTP credentials, so we can get pretty far without that.

Also, we could choose to configure the test server so it doesn't require FTP credentials at all but rather jumps right into the installation (i.e., if the webserver owns the Drupal files). From what I've seen anecdotally, that seems to be a much more common configuration that people are hitting since (unfortunately) most shared hosting is still set up that way.

Another point: The second most common configuration I think people are hitting is where Drupal asks them for FTP credentials but they don't have them because their server isn't set up for it (in other words, they really won't ever be able to install modules via the UI at all, only via the filesystem like in Drupal 6). This is a known usability issue that we tried to mitigate with help text... but it would be interesting to test. In that case, we just don't give them any FTP credentials at all :) I guess it would be somewhat pathological to set things up in a way where we know they can't complete the task (and hard to do that in a realistic way), but it's actually a very common situation in reality.

hmm

cfennell's picture

Thanks for the overview, @David_Rothstein, very helpful.

So, to summarize, what we would need to decide then is whether or not to make the web server own the Drupal files. We'll need to get this sorted soonish so the test server can be configured.

Given that we can only test

David_Rothstein's picture

Given that we can only test one setup, I think my vote would be to have the webserver own the Drupal files.

That seems like the most common scenario for the audience that this part of the UI is aimed at, plus setting up FTP credentials and then giving them to the test subject is probably a waste of time; if someone has a username and password sitting in front of them on a piece of paper, I don't think we gain much from a usability perspective in watching them type it in :)

So this basically means two things for the test servers:

  1. Most likely, someone will need to run chown -R www-data:www-data (or the equivalent) on the Drupal docroot to prepare it for the test.
  2. Since we're presumably testing with the overlay module on, we're going to want to apply the patch at http://drupal.org/node/936686 to the testing servers (or even better, get it committed to core before then!) or otherwise people are likely to hit some nasty bugs if/when they upload a module.
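A sketch of the server prep in step 1 (the docroot path and the `www-data` user are assumptions; substitute whatever the actual test server uses):

```shell
# Assumed docroot; adjust for the real test server.
DOCROOT=/var/www/drupal

# Let the webserver user own the Drupal tree, so the update manager
# installs modules directly instead of prompting for FTP credentials.
sudo chown -R www-data:www-data "$DOCROOT"

# Sanity check: list anything NOT owned by www-data (should print nothing).
find "$DOCROOT" ! -user www-data
```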

The Current (draft) Test Plan

Some thoughts

mpearrow's picture

Hi all,

I'm very excited to see this work going on and I think it's absolutely essential. Usable systems sometimes suffer from a lack of power, and vice versa, but I think there is a real opportunity here to make Drupal not only the technically best system but also the sleekest and easiest to use.

W/r/t task #5: the user can download the module without any credentials, but to install it will need credentials for an authorized user on the webserver - right?

Each of the tasks should ideally have concrete "finished states" so you know when the user has completed them. For example, task #2 has this mostly codified - create the About Us page via Create Content and add to the Primary Links menu via the Menu Settings option. Particularly if you want other testing sites to replicate your work, imagine when writing the task description that someone who has not been involved at all in the development of this plan has only your description to go by.

Each task should also have a realistic time limit. In a lab setting, people will attempt to figure out a task waaaaaay beyond the amount of time a real-world user would have given up in frustration. So if you're looking at number of tasks completed as a metric, it's important to control for this hot-seat factor.

Keep in mind that the results of this testing are likely only generalizable to the particular strata from which the participants are drawn - CS students. You can still gather a lot of really useful observations no matter what the group of participants happens to be, but for reporting purposes you want to be careful about making inferences about the "general" drupal user population.
