The Two Types of Tests
Software testing serves several important functions, and these are different enough to form the basis of a taxonomy of test cases. These functions include:
- Ensuring that a change to a library doesn't break something else
- Ensuring general usability by the intended audience
- Ensuring sane handling of errors
- Ensuring safe and secure operation under all circumstances
The first type of test, then, ensures that behavior conforms to the designed outlines of the contract with downstream developers or users. This is what we may call "design-implementation testing."
The second type ensures that behavior outside the designed parameters is either appropriately documented or appropriately handled, so that the software can be deployed and used in a safe and secure manner. This generally reduces to error testing.
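To make the distinction concrete, here is a minimal sketch in Python, assuming pytest and a parse_port() helper invented purely for illustration. The first test checks the documented contract; the second checks that input outside the designed parameters is handled sanely.

```python
import pytest

# Hypothetical helper, invented for illustration: parse a TCP port string.
def parse_port(text: str) -> int:
    value = int(text)  # int() raises ValueError on non-numeric input
    if not 0 < value < 65536:
        raise ValueError(f"port out of range: {value}")
    return value

# Design-implementation test: asserts the documented contract.
def test_parse_port_accepts_valid_port():
    assert parse_port("8080") == 8080

# Error test: asserts sane handling of input outside the designed parameters.
def test_parse_port_rejects_out_of_range():
    with pytest.raises(ValueError):
        parse_port("70000")
```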
These two types of tests are different enough that they should be written by different groups. The design-implementation tests are best written by the engineers designing the software, while the error tests need to be handled by someone somewhat removed from that process.
Why Software Engineers Should Write Test Cases
Design-implementation tests are a formalization of the interface specification. Because they formalize that specification, the people best prepared to write good software contract tests are those specifying the contracts, namely the software engineers.
There are a few ways this can be done. Engineers can write quick pseudocode that documents the interfaces along with test cases that define the contracts, or they can develop a quick prototype with test cases before handing off to developers, or the engineers and developers can be closely integrated. In any case, the engineers are in the best position, knowledge-wise, to write test cases that check whether the interface contracts are honored.
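A rough sketch of what such a hand-off might look like, again assuming pytest; the lookup_user() stub and its contract are hypothetical. The engineer supplies the documented interface and the contract tests, and the developers later supply an implementation that makes the tests pass.

```python
import pytest

# Interface stub the engineer hands to developers; the docstring is the
# contract and the body is deliberately left unimplemented.
def lookup_user(user_id: int) -> dict:
    """Return a dict with at least 'id' and 'name' for an existing user.

    Raise KeyError if no such user exists.
    """
    raise NotImplementedError

# Contract tests written alongside the stub; they fail until the developers
# provide an implementation that honors the documented contract.
def test_lookup_user_returns_required_fields():
    user = lookup_user(42)
    assert user["id"] == 42
    assert "name" in user

def test_lookup_user_raises_for_missing_user():
    with pytest.raises(KeyError):
        lookup_user(-1)
```

Until the implementation lands, these tests fail, which is the point: they are the contract the implementation must eventually satisfy.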
This hand-off process works best with a short initial iteration cycle for the prototypes. However, the full development can run on a much longer cycle, so the approach is not limited to agile development environments.
Having the engineers write these sorts of test cases ensures that a few very basic principles are not violated:
- The tests do not test the internals of dependencies beyond necessity
- The tests focus on interface instead of implementation
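To illustrate the second principle, consider this sketch built around a hypothetical Cache class: the first test reaches into an internal attribute and so breaks whenever the storage strategy changes, while the second exercises only the public interface.

```python
# Hypothetical cache class, invented for illustration.
class Cache:
    def __init__(self):
        self._store = {}  # internal detail, not part of the contract

    def put(self, key, value):
        self._store[key] = value

    def get(self, key, default=None):
        return self._store.get(key, default)

# Fragile: reaches into the internal dict, so it breaks if the storage
# strategy changes even though the observable behavior does not.
def test_put_checks_internals():
    cache = Cache()
    cache.put("a", 1)
    assert cache._store == {"a": 1}

# Better: exercises only the public interface, i.e. the contract.
def test_put_then_get():
    cache = Cache()
    cache.put("a", 1)
    assert cache.get("a") == 1
```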
Why You Still Need QA Folks Who Write Tests After the Fact
Interface and design-implementation tests are not enough. They cover the basics and ensure that correct operation will continue, but they generally do not cover error handling well, nor do they address security-critical questions.
For good error handling tests, you really need an outside set of eyes, not too deeply tied to the current design or code. It is easier for an outsider to spot a "user is an idiot" placeholder left in an error message than it is for the developer or the engineer who wrote it. Some of these problems can also be reduced by cross-team review of changes as they come in.
A second problem is that to test security-sensitive failure modes, you really need someone who can think about how to break an interface, not just what it was designed to do. The more brain cycles one has invested in implementing the software, the harder it often is to see how it might be broken.
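As a sketch of that adversarial mindset, assume a hypothetical safe_join() helper that maps user-supplied filenames into a base directory. The design-implementation test covers the intended use; the security-minded error test asks how a hostile input might escape it.

```python
import os
import pytest

# Hypothetical helper, invented for illustration: join a user-supplied
# filename onto a base directory, refusing to escape it.
def safe_join(base: str, filename: str) -> str:
    candidate = os.path.realpath(os.path.join(base, filename))
    if not candidate.startswith(os.path.realpath(base) + os.sep):
        raise ValueError("path escapes base directory")
    return candidate

# A design-implementation test covers the intended use...
def test_joins_plain_filename(tmp_path):
    assert safe_join(str(tmp_path), "report.txt").endswith("report.txt")

# ...while a security-minded error test tries to break the interface.
def test_rejects_path_traversal(tmp_path):
    with pytest.raises(ValueError):
        safe_join(str(tmp_path), "../../etc/passwd")
```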
Conclusion
Software testing is best woven relatively deeply into the development process, and it should happen both before and after main development. Writing test cases is often harder than writing code, and this goes double for writing good test cases versus good code.
Now, obviously, testing SQL stored procedures differs from testing C code, and there may be cases where you can dispense, to a small extent, with some after-the-fact testing (particularly in declarative programming environments). After all, you don't have to test what you can prove, but you cannot prove that an existing contract will be maintained into the future.