Test granularity

sdboyer's picture

I'm working on a project that does some fairly crazy extensions to drupal core functionality (biggest single item right now: $user goes from a stdClass object into an actual classed object, and I use OO fun to make a seven-way distinction between different user 'types'), and have resolved to basically take a TDD approach to driving the site forward, given the complexity and breadth of these changes. As a result, I've finally gotten around to writing my first few simpletests, and it's been a real pleasure - even though I'm 'still' on d5, which is evidently very much a yester-year thing with simpletest. (Note that I did, at least in the case of the user-related tests, check to ensure that the same basic approach is still utilized in the d7 user tests, and it is).

In writing up some of my user-centric tests, I initially worked from the existing tests that had already been written for drupal's user system. It became clear to me pretty quickly, though, that I was going to need to refactor a few things: I needed to deal with changes to my registration form, changes in the way that login redirects work, differences in the how & where of my account management/password changing page, and other things of that nature. I found, though, that that basically entailed my copy/pasting then reworking a lot of the code that was there, rather than being able to programmatically take advantage of what had already been written via one or another of the mechanisms that OO provides. This got me thinking about test granularity, at which point I quickly hit up webchick for her thoughts on how & why we promote code reusability and tests that operate at different scopes. The short version of her answer was, "We're still figuring a lot of that out." So I figured I'd come by here and share some of my thoughts on the subject.

Let me just be reeeeally clear - I'm not looking at this as a criticism of the work that's currently going on in pursuit of 100% core coverage. We need tests that work and cover the basics before we go gallivanting off into this territory, IMO.

I knew that I was going to need at least one layer of abstraction, as I've got seven different user types, most or all of which need (or will need) to be verified independently on their own run-through of the tests. So I figured I'd stick an extra class extension in between DrupalTestCase and my individual test cases. I called it DrupalUserTestCase, and the source is here. That class is then extended for each individual user type, at the moment with very little variation. Warning: though the code 'works' - the six user types generate 323 passes in total - I consider that code a very sloppy first round, and am quite aware that it doesn't fully achieve or illustrate what I'll discuss later in this thread. Also, there's some irrelevant business logic in there - and yes, I'm aware that I revealed the static 'staff validation code'. That system will be replaced and gone by the time the site is live.

Anyway, the bottom line is that the actual test method has very few direct assert*() calls in it. Let me just reproduce it here, so that you don't have to go tab-switching:

<?php
function _testUserRegistration() {
  // Register our new user.
  $this->ftestRegisterUser();

  // Now we check database fields.
  $this->test_user = $this->utestLoadUser(array('mail' => $this->current_test['mail']));

  // Sanity check before proceeding further.
  if (!isset($this->test_user->uid) || ($this->test_user->uid == 0) || !$this->test_user instanceof DrupalUser) {
    return FALSE;
  }

  // Attempt to log in with the wrong password.
  $this->ftestUserLoginBadPassword();

  // Use the password reset function to log in.
  $this->ftestPassResetLogin();

  // Change the password via a form.
  $this->ftestChangePassword();

  // Log out.
  $this->clickLink('Log out');
  // $this->assertNoText($user->name, 'Logged out');

  // Log in again.
  $this->ftestNormalLogin();

  $this->purgeTestUser();
}
?>

I got lazy with the logout and just did it straight in the test method, but other than that, I'm hoping the basic idea is clear: I've abstracted the individual (more 'unit-test' size) tests into smaller, more portable pieces that are independent of the individual simpletest I've opted to run. The tangible results are quite nice - as I said, 323 passed tests, and I've got one of the essential areas in my user system covered across all the different user types, using just the one parent class and minor variations in the subclasses. Perhaps best yet, it'd be trivial to add more user types (although it's very unlikely we'd do that).

It's not that I'm thinking that this kind of abstraction & encapsulation is a new idea around here. It just seems to me that we could be doing a lot more of it, and in a lot more of a structured way. Let me pick on DrupalWebTestCase::drupalCreateUser() for a sec as an example:

<?php
/**
 * Create a user with a given set of permissions. The permissions correspond
 * to the names given on the privileges page.
 *
 * @param $permissions
 *   Array of permission names to assign to user.
 * @return
 *   A fully loaded user object with pass_raw property, or FALSE if account
 *   creation fails.
 */
function drupalCreateUser($permissions = NULL) {
  // Create a role with the given permission set.
  if (!($rid = $this->_drupalCreateRole($permissions))) {
    return FALSE;
  }

  // Create a user assigned to that role.
  $edit = array();
  $edit['name']   = $this->randomName();
  $edit['mail']   = $edit['name'] . '@example.com';
  $edit['roles']  = array($rid => $rid);
  $edit['pass']   = user_password();
  $edit['status'] = 1;

  $account = user_save('', $edit);

  $this->assertTrue(!empty($account->uid), t('User created with name %name and pass %pass', array('%name' => $edit['name'], '%pass' => $edit['pass'])), t('User login'));
  if (empty($account->uid)) {
    return FALSE;
  }

  // Add the raw password so that we can log in as this user.
  $account->pass_raw = $edit['pass'];
  return $account;
}
?>

Great, we've got a new user - nice and clean, and it works well if we just want a test user to act as a temporary agent over the course of our test. But the limitations for our use case became pretty apparent pretty quickly:

  1. I need to be sure that user_save() (and user_load(), as well) is giving me the results I expect it should. If we're just talking about testing core, then there's little to no reason why this would be a concern - but if we're testing a module, integration between multiple modules, or an actual production site, lots of funky things could happen during these processes that make a compelling case for hiding away unit tests for user_save() and user_load() behind a clean OO interface.
  2. Modifications to the user registration form are exceptionally common things for production sites, and fairly common things for user-focused contrib modules. It seems to me that for these core functions to really be scalable, we should make the effort to make them easily capable of replicating the site environment in which they're being tested. Put another way, why should we have multiple totally separate methods for creating users - wouldn't it be better practice to separate out pieces of the logic into individual sub-methods and construct our user-creation method on the fly, based on parameters set by the simpletest being run?
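To make point 2 a bit more concrete, here's a rough sketch of what splitting user creation into overridable sub-methods might look like. This is standalone illustration code, not SimpleTest - all class and method names here are invented: the base class assembles the $edit array from small pieces, and a site-specific subclass overrides only the piece its registration form changes.

```php
<?php
// Hypothetical sketch: the user-creation array is assembled from small
// overridable sub-methods instead of one monolithic method.
class BaseUserCreator {
  // Build the $edit array by combining the sub-methods' output.
  public function buildUserEdit($rid) {
    return $this->baseFields($rid) + $this->profileFields();
  }

  protected function baseFields($rid) {
    $name = 'testuser' . mt_rand(1000, 9999);
    return array(
      'name'   => $name,
      'mail'   => $name . '@example.com',
      'roles'  => array($rid => $rid),
      'status' => 1,
    );
  }

  // Stock Drupal adds no extra registration fields; subclasses override this.
  protected function profileFields() {
    return array();
  }
}

// A production site whose registration form adds a 'staff_code' field only
// needs to swap in this one sub-method.
class StaffUserCreator extends BaseUserCreator {
  protected function profileFields() {
    return array('staff_code' => 'XYZZY');
  }
}

$creator = new StaffUserCreator();
$edit = $creator->buildUserEdit(3);
echo isset($edit['staff_code']) ? "staff field present\n" : "staff field missing\n"; // prints "staff field present"
```

The actual drupalCreateUser() would then just call buildUserEdit() and hand the result to user_save(), so contrib/site test cases never have to copy/paste the whole method.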

OK, I think I lumped a bunch of ideas together and could have used more numbers in that list, but the point's been made. Hopefully the principle has also been illustrated. In case it hasn't, I'll put it bluntly: granular unit tests that we combine into progressively more complex functional/black-box and/or integration tests would be a fantastic way to improve the extensibility, flexibility, and power of our testing framework. Now I'm moving on to implementation, because that's the real challenge.

I think there are a lot of competing needs when it comes to testing, and I'm quite sure I haven't thought of all of them, but the idea congealing in my mind that makes the most sense thus far is a version of the 'Visitor' pattern. The basic idea is that we write ourselves up some classes that are chock-full of these atomic unit tests, are sensitive to configuration options set in/passed by the currently active test, and then allow tests to instantiate the libraries as needed, possibly even transparently. Here's what one of these libraries, and an implementation of that library, might look like: (You're REALLY warned about spotty code on this one - I wrote this cold, no testing, so it's surely fabulously full of holes; again, just trying to illustrate the concept).

<?php
interface DrupalTestLibrary {
  public function __construct(DrupalWebTestCase $host_test);
}

class ExampleUserTestLibrary implements DrupalTestLibrary {
  protected $test;

  public function __construct(DrupalWebTestCase $host_test) {
    // Create a reference to the current test so that we can invoke its
    // assert*() methods directly from library methods. I'm not familiar with
    // how the $test_id system works, though; a quick glance makes me think we
    // could just use that and run the tests directly from the library itself.
    // If we do that, then the class declaration becomes:
    // class ExampleUserTestLibrary extends DrupalWebTestCase implements DrupalTestLibrary
    $this->test = $host_test;
  }

  public function loadUser($load_args, $test_args) {
    $this->checkArg($load_args, 'load_args');
    $this->checkArg($test_args, 'test_args');
    $this->doUserLoad($load_args, $test_args);
  }

  public function testUser($user, $test_args) {
    $this->checkArg($test_args, 'test_args');
    $this->checkUser($user);
    $this->testUserLoad($user, $test_args);
  }

  protected function checkArg(&$arg, $which) {
    // By allowing the test to get its args from either passed-in parameters or
    // values saved in the current test object, we allow for quite a bit of
    // flexibility in terms of allowing our library tests to perform operations
    // that are separated from the current data in the test object - or they
    // can also daisy-chain together quite nicely if they all fall back to the
    // arg data saved in the current test. No need for crazy parameter passing.
    if (is_null($arg)) {
      $arg =& $this->test->$which;
    }
  }

  protected function checkUser(&$user) {
    // In my system, I've created an abstract class at the top of my user class
    // inheritance hierarchy called DrupalUser; I'm only including it here to
    // indicate that the parameter must, in fact, be a drupal $user object :)
    if (!$user instanceof DrupalUser) {
      $user =& $this->test->test_user;
    }
  }

  protected function doUserLoad($load_args, $test_args) {
    $user = user_load($load_args);
    if ($this->test->perform_unit_tests['loadUser']) {
      $this->testUserLoad($user, $test_args);
    }
    return $user;
  }

  /**
   * Since these libraries are classes, and therefore inherently extendable, it'd
   * be easy for people to write subclasses that test additional $user elements,
   * depending on the needs for their particular module or production setup.
   */
  protected function testUserLoad($user = NULL, $test_args = NULL) {
    // One glaring problem with this is that the assert*() statements are all
    // protected methods, and so can't be called from the library directly.
    // There are ways to take care of that, of course - the easiest one being
    // giving the libraries the same test id as the test class that instantiated
    // them. If we do that, then this becomes more of a 'Composite' pattern.
    $this->test->assertTrue((!empty($user->uid) && is_numeric($user->uid)), 'user->uid set');
    $this->test->assertTrue(($user->uid > 0), 'uid > 0');
    $this->test->assertEqual($user->mode, 0, 'Checking mode field');
    $this->test->assertEqual($user->sort, 0, 'Checking sort field');
    $this->test->assertEqual($user->threshold, 0, 'Checking threshold field');
    $this->test->assertEqual($user->theme, '', 'Checking theme field');
    $this->test->assertEqual($user->signature, '', 'Checking signature field');
    $this->test->assertTrue(($user->created > time() - 20), 'Checking creation time.');
    $this->test->assertEqual($user->status, variable_get('user_register', 1) == 1 ? 1 : 0, 'Checking status field');
    $this->test->assertEqual($user->timezone, NULL, 'Checking timezone field');
    $this->test->assertEqual($user->language, '', 'Checking language field');
    $this->test->assertEqual($user->picture, '', 'Check picture field');
    if (isset($test_args['name'])) {
      $this->test->assertEqual($user->name, $test_args['name'], 'Checking name of user');
    }
    if (isset($test_args['mail'])) {
      $this->test->assertEqual($user->mail, $test_args['mail'], 'Checking e-mail address');
      $this->test->assertEqual($user->init, $this->test->current_test['mail'], 'Check init field');
    }
  }
}

/**
 * I'm only sticking this in here because we'd need to add some methods to
 * DrupalWebTestCase in order to facilitate management of the libraries.
 */
abstract class ExampleTestCase extends DrupalWebTestCase {
  public $perform_unit_tests = array('loadUser' => TRUE);

  // Doing it this way means you'd have to load all your libraries at the
  // beginning. We could work out a lazy-loading system, of course, but I
  // figured writing that up for this example would be...uh...excessive? :P
  // Doing so would probably help justify the intermediate methods, though.
  public function __construct($test_id = NULL) {
    parent::__construct($test_id);
    $libraries = $this->loadLibraries();
    foreach ($libraries as $library_name) {
      $this->getLibrary($library_name);
    }
  }

  abstract function loadLibraries();

  // Final because, when done properly anyway, this method should be flexible
  // enough for any drupal testing library we want to add, and allowing for other
  // mechanisms of adding libraries would probably lead more to code spaghetti
  // than good tests. But feel free to shoot me down :P
  final protected function getLibrary($library_name) {
    $library = new $library_name($this);
    // Yes, this limits this test to only one instance of a library per test.
    // Easily expanded if needed.
    $this->libraries[$library_name] = $library;
  }

  protected function loadUser($load_args = NULL, $test_args = NULL) {
    return $this->libraries['ExampleUserTestLibrary']->loadUser($load_args, $test_args);
  }

  protected function testUser($user = NULL, $test_args = NULL) {
    $this->libraries['ExampleUserTestLibrary']->testUser($user, $test_args);
  }
}

class ExampleTest extends ExampleTestCase {
  public function get_info() {
    // filler
  }

  public function loadLibraries() {
    return array('ExampleUserTestLibrary');
  }

  public function testUserThingie() {
    // On this run, we load a user and check that all the attributes are what we
    // expect them to be.
    $this->test_args = array('mail' => 'uid500@simpletestuser.net', 'name' => 'uid500');
    $this->load_args = array('uid' => 500);
    $user = $this->loadUser();

    // On this run, though, we just load the user - no tests, because we've shut
    // off testing on this run for the user_load test.
    $this->perform_unit_tests['loadUser'] = FALSE;
    $user = $this->loadUser();

    // Now, though, we can run that same unit test from here:
    $this->test_user = $user;
    $this->testUser();
  }
}
?>

Most of the in-line comments speak for themselves, so there's not a lot to say in summation. The bottom line is that we get an enormous amount of pretty fine-grained control in our actual test function, using only a few setting switches here and there. That 'testUserLoad' unit test in the library can be used to perform an exhaustive test on a $user object for a lot of different circumstances, but since the libraries are extensible, changing around how that particular test works is as simple as plugging in a different library (or an additional library, and running both tests). We could also consider writing the library classes as Singletons, which could have some other interesting benefits.
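The Singleton idea mentioned above could look something like the following standalone sketch (the registry class, DummyLibrary, and all names are invented for illustration): a static registry hands back one shared instance per library class, so every test case that asks for a given library sees the same object and any configuration set on it.

```php
<?php
// Hypothetical sketch of Singleton-style test libraries: one shared
// instance per library class for the whole test run.
class TestLibraryRegistry {
  private static $instances = array();

  public static function get($library_name) {
    // Instantiate lazily, and only once per class name.
    if (!isset(self::$instances[$library_name])) {
      self::$instances[$library_name] = new $library_name();
    }
    return self::$instances[$library_name];
  }
}

class DummyLibrary {
  public $calls = 0;
}

$a = TestLibraryRegistry::get('DummyLibrary');
$a->calls++;
$b = TestLibraryRegistry::get('DummyLibrary');
// $a and $b are the same instance, so state set by one test is visible
// to the next one that fetches the library.
echo $b->calls, "\n"; // prints 1
```

Whether that shared state is a feature (daisy-chaining config between tests) or a hazard (leakage between supposedly independent tests) is exactly the kind of trade-off worth discussing.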

This post ended up being much longer than I originally intended it to be, so sorry about that. I'm really just hoping to get a sense of what/how the testing community feels about integrating this type of approach with our existing approaches. Obviously, I think it'd be a boon :)

Comments

I think you've made some

dlhubler's picture

I think you've made some maintainable tests using the current test framework.

Some comments
* I actually try to have assertions in the test method body. That can be hard, but it usually means the API needs to be simplified instead of the test refactored. For example, your most readable method is "testUserLoad", but it takes parameters, so now I need to go back and look at where it's called to track down if an assertion failed in that method.
* Once you bring in OO, you ultimately can test things in true isolation and may not need a complex hierarchy of unit test subclasses.
* It's somewhat lipstick on a pig. The ultimate solution in my mind is to have 2 categories of tests: low-level and high-level. Also called unit tests and functional tests in many circles. Low-level ones extend plain old UnitTestCase to test code in isolation, even w/o a database if you're lucky. Today this is impossible w/o runkit or until more code is rewritten in OO, a direction you are debuting here. High-level tests use something like selenium and are less concerned about every user-level field, but test page flow. Drupal's current investment has been in one layer of tests, and it sits somewhere between low-level and high-level, which I cannot criticize too much because of the challenges I pointed out.

dlhubler has a good point

catch's picture

dlhubler has a good point about readability. It's nice to go into a failing test, find the line, and see it correspond directly to a drupalGet() or drupalPost(). However, like I said on irc, centralising some common tasks like comment creation (which is currently implemented on a test by test basis) would help a bit - in the same way as drupalCreateNode() is handy.

Yeah, the assertions in the

sdboyer's picture

Yeah, the assertions in the test body are a very real, very substantial advantage when it comes to immediately understanding where you are in the process of a test. If we were to employ anything here, I think it'd be really crucial that we figure out a way to make the assertion reporting tie in clearly to the library that it comes from, and the test that individually called it. Shouldn't be too bad, it's just stepping back in the call stack a bit.

I think you've made some

sdboyer's picture

I think you've made some maintainable tests using the current test framework.

Great! That's really the basic goal here - a more flexible & maintainable testing framework for the Drupal we have, not the Drupal we could/should/might/will have =)

I actually try to have assertions in test method body. That can be hard, but it usually means API needs to be simplified instead of test refactored. For example your most readable method is "testUserLoad" but it takes parameters, now I need to go back and look at where its called to track down if an assertion failed in that method.

Yep, having to backtrack through the call to figure out where you are is really not optimal for testing. In fact, it's really not optimal to the point of being quite detrimental. Doing things this way would pretty much necessitate adding in a layer that grouped the results of assertions made by libraries sequentially on output, so that it's immediately easy to tell which invocation of a given library unit test resulted in a fail.

Once you bring in OO, you ultimately can test things in true isolation and may not need a complex hierarchy of unit test subclasses. It's somewhat lipstick on a pig...

It's true. Quite true. On the other hand, I think it also brings us back to testing for the Drupal we have, not the Drupal we could/should/might/will have. I'd be all for figuring out ways to reduce the complexity of the class hierarchy, but I don't know how practical and/or feasible shifting many of the crucial pieces of core to OO (let's not even think about contrib) would be.

Even if we do, though, I'm not sure I agree that it's just lipstick on a pig, much though I love the expression. From a pure coding elegance perspective, I'm pretty much in agreement, but I think that a lot of the substantive benefit here is human/social, not technical. I think this could potentially move us in the direction of being able to generate a testing API that is documentable and usable by drupal devs of every skill level - which I believe is one of the main obstacles to testing becoming a more widespread phenomenon. Present a clean & relatively lean interface that expresses itself in similar terms to the way that people are likely to be thinking about testing, and who knows what could happen. We've got FAPI. Dare I say...TAPI?

Heh, I wasn't sure whether to use the unit test/functional test terminology, though I did mention it there at the end - a lot of this is new territory for me, especially testing, so I wasn't sure if those were the right terms to use to describe this. But yeah, I think we're pointed at the same goal - two basic categories of tests, low-level and high-level. It seems to me that the low-level tests are always going to be the ones that demand more expertise and experience to write well, which is a big reason why I think this libraries approach may have legs: if we're explicit about the low/high distinction, then we know that our high-level tests should want to use our low-level tests, which in turn means that we should store those low-level tests so that they're portable and easily called up by the high-level tests - and so that they're easily maintained by the people with the greatest knowledge of the intricacies of what they're testing.

I think it also brings us

catch's picture

I think it also brings us back to testing for the Drupal we have, not the Drupal we could/should/might/will have.

One of my many hopes for testing, is that it'll force some refactoring of gnarly old dusty functions because they're hard to test. How much this happens remains to be seen, but it's one argument for not making things too easy on the testing end to work around shortcomings in core (not that this is what you're suggesting).

For sure, and I share the

sdboyer's picture

For sure, and I share the same hope. In fact, I'd go a step further (well, this may be what you have in mind, too :P) to say that testing might be one of the few methods available that will allow us to promote better code quality through technical, rather than social standards. If you want devs to listen, speaking to them directly through their code & their workflow will always be more effective than speaking to them person to person. At least, I know that's true of me, and I'm a reeeeally socially-oriented guy :)

Thought a little bit more

sdboyer's picture

Thought a little bit more about your response here, and I'm curious - what's the problem with tests taking parameters? I can picture a few answers, some more legit than others. The one that stands out the most in my mind is a line of thinking that says any test which takes parameters is inherently not a unit test - assuming that that variable data is used to control the flow of the test. I don't think that makes parameters an inherently bad thing, though, as long as they're not being used to control what actual tests are being performed, but instead are used to vary the sample data which is being tested. The example is hardly elegant, but that's basically what it's doing - if some values are passed in for testing, the test assumes they've been properly built and runs them through the checker. If not, then it tries to get its data from a location in the $test object that it knows will have the relevant data (if there's any to be had). Maybe what's missing there is some kind of 'critical fail,' where the test errors out completely if it doesn't have the necessary data to run the tests properly.

That's the only clear-ish idea that jumps to mind, though, so please do tell - what's your reason for disliking the fact that it takes parameters?
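For what it's worth, that 'critical fail' behavior is a small change to the checkArg() fallback from my example. Here's a standalone sketch (class and property names invented, throwing an exception standing in for whatever abort mechanism the framework would really use): if neither the passed-in parameter nor the fallback slot on the test object has data, bail out instead of asserting against garbage.

```php
<?php
// Hypothetical sketch of checkArg() with a 'critical fail' when no data
// can be found anywhere.
class ArgResolver {
  public $test_args; // fallback data, as saved on the test object
  public $load_args; // intentionally left unset for the failure case

  public function checkArg($arg, $which) {
    // Fall back to the value saved on the test object.
    if (is_null($arg)) {
      $arg = $this->$which;
    }
    // Still nothing? Critical fail: in a real test case this would abort
    // the whole test run; here we just throw.
    if (is_null($arg)) {
      throw new Exception("Critical fail: no data available for '$which'.");
    }
    return $arg;
  }
}

$r = new ArgResolver();
$r->test_args = array('mail' => 'uid500@simpletestuser.net');

// Falls back to the saved value when NULL is passed in.
$args = $r->checkArg(NULL, 'test_args');
echo $args['mail'], "\n"; // prints "uid500@simpletestuser.net"
```

Calling $r->checkArg(NULL, 'load_args') would throw, since nothing was saved under that key.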

You get immediate context

dlhubler's picture

You get immediate context when interpreting the assertion failure

function testAdd() {
  $this->assertEqual(2, 1+1);
  $this->assertEqual(2, 3+(-1));    <-- Failure on this line
}

v.s.

function testAdd() {
  $this->assertThingsThatAddToTwo(1+1);
  $this->assertThingsThatAddToTwo(3+(-1));
}

function assertThingsThatAddToTwo($value) {
  $this->assertEqual(2, $value);                            <-- Failure on this line
}

From the second example, you cannot tell where the error is w/o looking at the stack. Someone mentioned somehow getting simpletest to report the error as if it happened on the first function; in this case, that would address the issue. But in other cases, when you're asserting many different values, it doesn't.

I've done it, I try to avoid it when i can. Here's an example of a test, little combination of both.
http://sipxecs.sipfoundry.org/rep/sipXecs/main/sipXconfig/neoconf/test/o...

Ahh, ok, so it's really the

sdboyer's picture

Ahh, ok, so it's really the same issue as most folks have been raising - losing that immediate context for assertion failures and needing to consult the stack to figure it out.

Someone mentioned somehow getting simpletest to report the error as if it happened on the first function, in this case, that would address the issue.

Yep, me. My first response to catch, above. So to flesh that idea out a bit, as it clearly is the main blocking concern here...

Now that I've reflected on it further and gotten some feedback, I personally wouldn't advocate for any of these changes unless/until we figure out a way to adequately deal with the confusion & indirection resulting from moving the actual assertions out of the tests. I played with this for a few more hours this afternoon, and though I haven't got any more code to share, I do have some thoughts on how we might be able to really address the issue.

I think it's important to flag that the core problem here isn't that the above testing model is technically inferior to a model where we make assertions in the test methods themselves. Most of the responses thus far appear to have made some version of that point, but it's worth highlighting because, if the problem boils down to a UI issue, then the logical next step is to look for UI-based solutions. More specifically, what the UI is missing in these tests is where we are in the overall test when a given assertion is being made. It's not that the information isn't accessible, just that our current logging/output method for the tests doesn't display it. So that's the problem I've tried to think about solving.

To make this work, we'd need to introduce some kind of meta-data layer into the tests themselves - something that can keep track of which method call within the overall test we're on, which/how many times we've run each of the individual 'unit' tests in the libraries, etc. As long as that meta-data layer is able to keep track of the overall progress that's being made, and we delegate the logging responsibility to that layer, then we can go from a flat log to a 'threaded' log - one that organizes the assertions into a hierarchy that corresponds to their relative position in the stack. All it'll take to get that info is some creative & careful use of debug_backtrace().
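To give a feel for what that meta-data layer might do, here's a standalone sketch (all class and method names invented): at assertion time, the logger walks debug_backtrace() and records the chain of methods between the test method and the assertion, which is exactly the information a reporter needs to render the flat log as a tree.

```php
<?php
// Hypothetical sketch of a threaded assertion log built on debug_backtrace().
class ThreadedAssertionLog {
  public $entries = array();

  public function record($status, $message) {
    $path = array();
    foreach (debug_backtrace() as $frame) {
      // Skip the logger's own frame; keep the calling methods,
      // outermost first.
      if (isset($frame['function']) && $frame['function'] != 'record') {
        array_unshift($path, $frame['function']);
      }
    }
    $this->entries[] = array('path' => $path, 'status' => $status, 'message' => $message);
  }
}

// Stand-ins for a library 'unit' test and the test method that calls it.
class FakeLibrary {
  public $log;
  public function testUserLoad() {
    $this->log->record('pass', 'uid > 0');
  }
}

class FakeTest {
  public function testRegistration(FakeLibrary $lib) {
    $lib->testUserLoad();
  }
}

$log = new ThreadedAssertionLog();
$lib = new FakeLibrary();
$lib->log = $log;
$test = new FakeTest();
$test->testRegistration($lib);

$entry = $log->entries[0];
echo implode(' > ', $entry['path']), "\n"; // prints "testRegistration > testUserLoad"
```

Grouping entries by their recorded path is all the reporter would need to do to show which invocation of which library unit test produced each assertion.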

IMO, presenting the tests in a tree is actually more beneficial than being able to go directly to the individual line where you made the assertion. It might not be better for you if you're the one who wrote the test, but given that 98% of the people who might see that test are never going to come closer to your code than arm's length, it's much better. Instead of having to learn the specifics of what each and every test does, people will be able to learn what the subcomponents run from the libraries generally do. Armed with that knowledge, they'll be able to immediately grok the basic logic behind most tests they see (that use the libraries). That, as you said initially, makes this a lot more maintainable.

My ongoing thinking with this is that we'd want to implement a DrupalTestLibrary interface and probably a DrupalTestManager class/interface. The latter would be that metadata layer that handles calling tests, determining where/when/how to log the test results, etc. If we equip the test manager with some nifty Reflection API stuff, then we could probably make it even smarter about picking up meta-data on the test that it's currently running.

Oh and yeah - damn shame I

sdboyer's picture

Oh and yeah - damn shame I don't know ANY Java :P If I did, I'd probably be working on my Eclipse + Drupal integration dreams :)

Just replying as a bump -

sdboyer's picture

Just replying as a bump - interested to hear your thoughts on my response.

This test pattern can test more than you'd think

dlhubler's picture

If you write this in a file called BirdTest.php:

<?php
require_once 'autorun.php';
class Bird {
  function fly() {
    return "away";
  }
}

class BirdTest extends UnitTestCase {

  function testFly() {
    $bird = new Bird();
    $this->assertEqual('home', $bird->fly());
  }
}

Now run (with simpletest in include_path)

php BirdTest.php

It shows

Content-type: text/html
BirdTest.php
1) Equal expectation fails at character 0 with [home] and [away] at [/home/dhubler/work/configuration-management/trunk/tests/BirdTest.php line 13]
  in testFly
in BirdTest
FAILURES!!!
Test cases run: 1/1, Passes: 0, Failures: 1, Exceptions: 0

I find this simple pattern serves me quite well for writing new code for Drupal. There is no need for a stack trace. I do realize it is not however very useful for testing existing code or testing the interaction with many drupal things.

How do you write unit tests for procedural code not designed for unit tests? Many ways, but each more difficult than the last. Your examples look great. Building libraries and such. Here's an idea I had along w/Aaron Stewball
http://code.google.com/p/php-mock-function/

Is there value in retrofitting unit tests on existing code? Yeah, some - I hear it opens the code freeze window for D7, and it does find a few bugs. But in a lot of ways the code is already tested, though, manually by the community. Price has been paid. Tests are arguably more useful when code changes. So while you're changing code, rewrite it to make it more testable now that OOP is an option. May seem idealistic, but I like to think it is pragmatic: where to focus if time was limited.

Sorry I'm not commenting on your particular idea to come up with a backtrace/tree idea; I'm not too familiar with the innards of PHP.

Whoops, wrong button!

sdboyer's picture

Whoops, wrong button!

I agree that the assertions

boombatower's picture

I agree that the assertions in the test body help readability, but from writing a number of tests I've come to find that there are lots of patterns in the tests where you need to access a page and make a number of assertions, change a variable, and repeat. Things like that can very easily be abstracted, as I have done in several of the tests, to something like:

<?php
changeVariable();
assertionFunction();

changeVariable();
assertionFunction();

function assertionFunction() {
  $this->drupalGet();
  // make assertions
}
?>

The above example still allows the code to be readable by putting the drupal[Get|Post] in the same function, but at the same time creates reusable code that makes the main body of the test method more readable.

Would you be interested in

sdboyer's picture

Would you be interested in seeing me rewrite some of the existing D7 tests into this format so that we can do some side-by-side comparisons?

Sure.

boombatower's picture

Sure.

I believe the comment test is written in this manner.

Testing and Quality Assurance
