Recipe for a good test case

I have seen as many variations of test case design as I've seen projects. Naturally, the circumstances of any given project must be taken into consideration when creating test plans, test cases, and the testing infrastructure. In this article, we'll highlight the characteristics that are typically captured as part of a model test case, and the purpose that each characteristic serves. You should be able to start with this template and tailor your test case development effort as needed from there. (A brief sketch pulling these fields together into a single structure follows the list.)

  • Test ID. Uniquely identifies the test case.
  • Test Type. Identifies what kind of test case this is. For example, this field might be used to distinguish positive tests from negative tests, system tests from business use case tests, functional tests from performance tests, or any other categorization that is logical for your project.
  • Status. Identifies the status of this test case. This field can be used to help manage the test case development efforts of your team. For example, a status of "Pending" might be used if the test case has been identified, but not fully documented. A status of "Completed" might be used if the test case has been documented, but not yet validated by the business subject matter expert or analyst. A status of "Validated" or "Active" might be used after the business SME/analyst has confirmed the test case.
  • Source Requirements. Identifies the business or design requirement(s) that are being tested by this test case. It is critical for your test cases to be linked back to your project requirements so that you can generate the necessary reports or dashboard views that allow you to confirm quality prior to a new release of your system.
  • Priority. The priority of the test case. Test cases with a "High" priority are likely to be earmarked for inclusion in your application smoke test - the core set of test cases that are run first on each new build of the application to quickly validate that key functionality is stable. The test priority will generally be derived from the priority of the source requirements, as well as the impact of a failure on the business or system.
  • Owner. The test case owner, ultimately accountable for the accuracy of the test case and its conformance to requirements. This is generally a business subject matter expert or business analyst.
  • Expected User Roles. Identifies the user role or roles that are expected to perform the business functions relating to this test case once the system is live. Most test cases are written with a precondition that the user has logged into the application in the expected user role, or as a super user.
  • Test Objective. Summarizes the actions to be performed, and the result that the test will attempt to validate. This information is critical, and should be clearly stated in every test case you create. For a "Pending" test case, you might document your test objectives without providing test steps or expected results, to be used as a placeholder for future test development.
  • Preconditions. Conditions that the tester needs to confirm are true before beginning the execution of the test.
  • Test Data. Information about the data values that will be used during the test execution, either as test inputs (parameters) or as test outputs (expected results). The test data can be embedded directly in the test case definition, or can be located in a separate tab/spreadsheet to be attached to the test case definition.
  • Test Steps. The list of individual atomic operations that the tester must perform, using the system, in order to determine whether the expected result has been obtained.
  • Expected Results. One or more observable results that the tester is seeking to validate through the course of the test execution. Expected results might be provided on one or more of the individual test steps.
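
To make the template concrete, here is a minimal sketch of how these fields might be captured as a Python data structure. The class, field names, status values, and example data below are illustrative assumptions on our part, not a prescribed schema; in practice, most teams record this template in a test management tool or spreadsheet rather than in code.

    # A sketch of the test case template above as a Python dataclass.
    # All names and values are illustrative, not a prescribed schema.
    from dataclasses import dataclass, field
    from enum import Enum


    class Status(Enum):
        PENDING = "Pending"      # identified, but not yet fully documented
        COMPLETED = "Completed"  # documented, but not yet validated by the SME
        VALIDATED = "Validated"  # confirmed by the business SME/analyst


    @dataclass
    class TestCase:
        test_id: str                    # uniquely identifies the test case
        test_type: str                  # e.g., "positive", "negative", "performance"
        status: Status                  # development status of the test case itself
        source_requirements: list[str]  # requirement IDs this case traces back to
        priority: str                   # "High" cases feed the smoke test
        owner: str                      # SME/analyst accountable for accuracy
        expected_user_roles: list[str]  # roles expected to perform these functions
        test_objective: str             # actions to perform and result to validate
        preconditions: list[str] = field(default_factory=list)
        test_data: dict[str, str] = field(default_factory=dict)    # inputs and expected outputs
        test_steps: list[str] = field(default_factory=list)        # atomic user operations
        expected_results: list[str] = field(default_factory=list)  # observable outcomes


    # A "Pending" placeholder: the objective is stated, but steps and
    # expected results are deferred to future test development.
    login_case = TestCase(
        test_id="TC-001",
        test_type="positive",
        status=Status.PENDING,
        source_requirements=["REQ-AUTH-01"],
        priority="High",
        owner="Business analyst",
        expected_user_roles=["Registered User"],
        test_objective="Log in with valid credentials and verify the dashboard loads.",
    )

Even if you never represent test cases in code, spelling the template out this way makes explicit which fields are required for every test case and which can be deferred while a case is still "Pending."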

To learn more about WinMill Software's application development best practices and how we help make your project a success, contact us at inquiry@winmill.com or (888) 711-6455.