In an ideal world, any possible action you could perform as a Sails user—whether programmatically or via the command-line tool—would have a test. However, the number of configuration variations in Sails, along with the fact that userland code can override just about any key piece of core, means we'll never quite get to this point. And that's okay.
Instead, the Sails project's goal is for any feature of Sails you might use—programmatically or via the command-line tool—to have a test. In cases where these features are implemented within a dependency, the only tests for that feature exist within that dependency (e.g. Waterline, Skipper, and Captains Log). Even in these cases, though, tests in Sails inevitably end up retesting certain features that are already verified by Sails' dependencies, and there's nothing wrong with that.
We should strive to avoid tests that verify exclusivity, since doing so cripples our ability to develop quickly. In other words, tests should not fail with the introduction of additive features.
For instance, if you're writing a test to check that the appropriate files have been created by `sails new`, it would make sense to check for those files, but it would not make sense to ensure that ONLY those files were created (i.e. adding a new file should not break the tests).
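Here's a minimal sketch of that idea, assuming a Mocha-style test; the `appPath` and the file list are placeholders, not the actual assertions used in Sails' own test suite:

```javascript
var assert = require('assert');
var fs = require('fs');
var path = require('path');

describe('sails new', function () {
  // Hypothetical path to an app generated earlier in the test setup.
  var appPath = path.resolve(__dirname, 'testApp');

  it('should create the expected files', function () {
    // Assert that each expected file exists...
    ['package.json', 'app.js', '.sailsrc'].forEach(function (file) {
      assert(fs.existsSync(path.join(appPath, file)), 'Expected ' + file + ' to exist');
    });
    // ...but do NOT assert that these are the *only* files created.
    // A new file added by a future generator should not break this test.
  });
});
```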
Another example is a test which verifies the correctness of blueprint configuration, e.g. `sails.config.blueprints.rest`. The test should check that blueprints behave properly with the `rest` config enabled and disabled. We could change the configuration, add more controller-specific options, etc., and we'd only need to write new tests.
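A sketch of the "rest enabled" half of that test might look like the following, assuming a Mocha-style suite and a hypothetical `User` model; the port and route are placeholders, and lift/lower callback details may vary slightly between Sails versions:

```javascript
var assert = require('assert');
var http = require('http');
var sails = require('sails');

describe('blueprints with sails.config.blueprints.rest enabled', function () {

  before(function (done) {
    // Lift the app, pinning down only the config key under test.
    sails.lift({
      port: 1342,
      blueprints: { rest: true },
      log: { level: 'silent' }
    }, done);
  });

  after(function (done) {
    sails.lower(done);
  });

  it('should expose a RESTful GET route for the User model', function (done) {
    http.get('http://localhost:1342/user', function (res) {
      // Assert on observable behavior (the response),
      // not on the shape of sails.config itself.
      assert.equal(res.statusCode, 200);
      done();
    }).on('error', done);
  });

  // A mirror-image suite would lift with `blueprints: { rest: false }`
  // and assert that the same request is not handled (e.g. returns a 404).
});
```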
If, on the other hand, our strategy for testing the blueprints involved asserting exactly what the config "should" look like, we'd have to modify the tests every time a new option was added. This may not sound like a big deal, but it can get out of hand quickly!
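To make the contrast concrete, here's a hedged sketch (assuming a lifted app where `sails` is in scope; the config keys shown are illustrative):

```javascript
var assert = require('assert');

// Brittle: asserts the *entire* shape of the config, so any additive
// option (e.g. a new controller-specific setting) breaks this test.
assert.deepEqual(sails.config.blueprints, {
  rest: true,
  actions: true,
  shortcuts: true
});

// Resilient: asserts only the key this test actually cares about.
assert.strictEqual(sails.config.blueprints.rest, true);
```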