TEST PLAN FORMAT
1. TEST PLAN.
1.1 Introduction.
This
section should summarise the software items and software features to be tested.
A justification of the need for testing may be included.
1.2 Test items. This
section should identify the test items. References to other software documents
should be supplied to provide information about what the test items are
supposed to do, how they work, and how they are operated. Test items should be
grouped according to release number when delivery is incremental.
1.3 Features to be tested. This section should identify all the features and combinations of features that are to be tested. This may be done by referencing sections of requirements or design documents. References should be precise yet economical, for example:
· 'the acceptance tests will cover all requirements in the User Requirements Document except those identified in Section 1.4';
· 'the unit tests will cover all modules specified in the Detailed Design Document except those modules listed in Section 1.4'.
1.4 Features not to be tested. This section should identify all the features and significant combinations of features that are not to be tested, and why.
1.5 Approach. This section should specify the major activities, methods (e.g. structured testing) and tools that are to be used to test the designated groups of features. Activities should be described in sufficient detail to allow identification of the major testing tasks and estimation of the resources and time needed for the tests. The coverage required should be specified.
1.6 Item pass/fail criteria. This section should specify the criteria to be used to decide whether each test item has passed or failed testing.
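Pass/fail criteria for a test item can often be made executable, which removes ambiguity in deciding the verdict. A minimal sketch in Python, assuming a hypothetical item whose criterion is that every response arrives within 2 seconds with error code 0 (the data, threshold and function name are illustrative, not part of the format):

```python
# Hypothetical pass/fail check for one test item: every recorded
# (elapsed_seconds, error_code) pair must be within 2.0 s and carry
# error code 0. Values and limits are examples only.

def item_passes(results, max_seconds=2.0):
    """Return True if every (elapsed, error_code) pair meets the criteria."""
    return all(elapsed <= max_seconds and code == 0
               for elapsed, code in results)

measurements = [(0.4, 0), (1.8, 0), (0.9, 0)]
print(item_passes(measurements))   # all within limits
print(item_passes([(2.5, 0)]))     # too slow, item fails
```

Encoding the criteria this way also makes the decision repeatable across test campaigns.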
1.7 Suspension criteria and resumption requirements. This section should specify the criteria used to suspend all, or a part of, the testing activities on the test items associated with the plan. It should also specify the testing activities that must be repeated when testing is resumed.
1.8 Test deliverables. This section should identify the items that must be delivered before testing begins and the items that must be delivered when testing is finished.
1.8.1 Items to be delivered before testing:
· test plan;
· test designs;
· test cases;
· test procedures;
· test input data;
· test tools.
1.8.2 Items to be delivered after testing:
· test reports;
· test output data;
· problem reports.
1.9 Testing tasks. This section should identify the set of tasks necessary to prepare for and perform testing. It should also identify all inter-task dependencies and any special skills required. Testing tasks should be grouped according to release number when delivery is incremental.
1.10 Environmental needs. This section should specify both the necessary and desired properties of the test environment, including:
· physical characteristics of the facilities, including hardware;
· communications software;
· system software;
· mode of use (i.e. standalone or networked);
· security;
· test tools.
1.11 Responsibilities. This section should identify the groups responsible for managing, designing, preparing, executing, witnessing, and checking tests. Groups may include developers, operations staff, user representatives, technical support staff, data administration staff, independent verification and validation personnel, and quality assurance staff.
1.12 Staffing and training needs. This section should specify staffing needs according to skill. Training options for providing the necessary skills should be identified.
1.13 Schedule. This section should include the test milestones identified in the software project schedule and all item delivery events, such as the delivery of a unit for integration testing or the delivery of the system for independent verification. This section should specify:
· any additional test milestones and the time required for each testing task;
· the schedule for each testing task and test milestone;
· the period of use for all test resources (e.g. facilities, tools, staff).
1.14 Risks and contingencies. This section should identify the high-risk assumptions of the test plan and specify contingency plans for each.
1.15 Approvals. This section should specify the names and titles of all persons who must approve this plan. Alternatively, approvals may be shown on the title page of the plan.
2. TEST DESIGNS. (For each plan, there can be 1 to n test designs.)
2.n.1. Test Design identifier. The title of this section should specify the test design uniquely. The content of this section should briefly describe the test design.
2.n.2. Features to be tested. This section should identify the test items and describe the features, and combinations of features, that are to be tested. For each feature or feature combination, a reference to its associated requirements in the item requirement specification (URD, SRD) or design description (ADD, DDD) should be included.
2.n.3. Approach refinements. This section should provide the rationale for test-case selection and the packaging of test cases into procedures. The method for analysing test results should be identified (e.g. comparison with expected output, comparison with old results, proof of consistency, etc.). The tools required to support testing should be identified. This section should describe the results of applying the methods described in the approach section of the test plan. Specifically, it may define the:
· module assembly sequence (for unit testing);
· paths through the module logic (for unit testing);
· component integration sequence (for integration testing);
· paths through the control flow (for integration testing);
· types of test (e.g. white-box, black-box, performance, stress, etc.).
2.n.4. Test case identification. This section should list the test cases associated with the design and give a brief description of each.
2.n.5. Feature pass/fail criteria. This section should specify the criteria to be used to decide whether the feature or feature combination has passed or failed.
3. TEST CASE SPECIFICATION.
3.n.1 Test Case identifier. The title of this section should specify the test case uniquely. The content of this section should briefly describe the test case.
3.n.2 Test items. This section should identify the test items. References to other software documents should be supplied to help understand the purpose of the test items, how they work and how they are operated.
3.n.3 Input specifications. This section should specify the inputs required to execute the test case. File names, parameter values and user responses are possible types of input specification. This section should not duplicate information held elsewhere (e.g. in test data files).
3.n.4 Output specifications. This section should specify the outputs expected from executing the test case that are relevant to deciding upon pass or failure. File names and system messages are possible types of output specification. This section should not duplicate information held elsewhere (e.g. in log files).
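One way to honour the no-duplication rule in both input and output specifications is to keep the test case as a small structured record that references external data files rather than embedding their contents. A hedged sketch (the identifiers, field names and file names are hypothetical, not prescribed by this format):

```python
# Illustrative test case record: inputs and expected outputs reference
# external files instead of duplicating their contents. All names and
# values below are examples only.

test_case = {
    "identifier": "TC-3.1",
    "items": ["parser"],
    "inputs": {"data_file": "input/case_3_1.dat", "user_response": "Y"},
    "expected": {"output_file": "expected/case_3_1.out",
                 "message": "Run completed"},
}

print(test_case["identifier"])
print(sorted(test_case["expected"]))
```

Because the record holds only references, the test data files remain the single source of truth for their contents.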
3.n.5 Environmental needs.
3.n.5.1 Hardware. This section should specify the characteristics and configurations of the hardware required to execute this test case.
3.n.5.2 Software. This section should specify the system and application software required to execute this test case.
3.n.5.3 Other. This section should specify any other requirements, such as special equipment or specially trained personnel.
3.n.6 Special procedural requirements. This section should describe any special constraints on the test procedures that execute this test case.
3.n.7 Inter-case dependencies. This section should list the identifiers of test cases that must be executed before this test case. The nature of the dependencies should be summarised.
4. TEST PROCEDURES.
4.n.1. Test Procedure identifier. The title of this section should specify the test procedure uniquely. This section should reference the related test design.
4.n.2. Purpose. This section should describe the purpose of this procedure. A reference should be given for each test case the procedure uses.
4.n.3. Special requirements. This section should identify any special requirements for the execution of this procedure.
4.n.4. Procedure steps. This section should include the steps described in the subsections below, as applicable.
4.n.4.1 Log. This section should describe any special methods or formats for logging the results of test execution, the incidents observed, and any other events pertinent to the test.
4.n.4.2 Set up. This section should describe the sequence of actions necessary to prepare for execution of the procedure.
4.n.4.3 Start. This section should describe the actions necessary to begin execution of the procedure.
4.n.4.4 Proceed. This section should describe the actions necessary during the execution of the procedure.
4.n.4.5 Measure. This section should describe how the test measurements will be made.
4.n.4.6 Shut down. This section should describe the actions necessary to suspend testing when interruption is forced by unscheduled events.
4.n.4.7 Restart. This section should identify any procedural restart points and describe the actions necessary to restart the procedure at each of these points.
4.n.4.8 Stop. This section should describe the actions necessary to bring execution to an orderly halt.
4.n.4.9 Wrap up. This section should describe the actions necessary to terminate testing.
4.n.4.10 Contingencies. This section should describe the actions necessary to deal with anomalous events that may occur during execution.
5. TEST REPORTS.
5.n.1 Test Report identifier. The title of this section should specify the test report uniquely.
5.n.2 Description. This section should identify the items being tested, including their version numbers. The attributes of the environment in which testing was conducted should be identified.
5.n.3 Activity and event entries. This section should define the start and end time of each activity or event. The author should be identified. One or more of the descriptions in the following subsections should be included.
5.n.3.1 Execution description. This section should identify the test procedure being executed and supply a reference to its specification. The people who witnessed each event should be identified.
5.n.3.2 Procedure results. For each execution, this section should record the visually observable results (e.g. error messages generated, aborts and requests for operator action). The location of any output, and the result of the test, should be recorded.
5.n.3.3 Environmental information. This section should record any environmental conditions specific to this entry, particularly deviations from the normal.
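An activity or event entry with its start time, end time, author and result can be captured as a simple timestamped record. A hedged sketch (the field names, activity label and author are examples, not part of the format):

```python
from datetime import datetime, timezone

# Illustrative activity entry for a test report: start and end times,
# author, and an execution result. All field values are examples only.

def make_entry(activity, author, result):
    start = datetime.now(timezone.utc)
    # ... the activity itself would execute here ...
    end = datetime.now(timezone.utc)
    return {"activity": activity, "author": author,
            "start": start.isoformat(), "end": end.isoformat(),
            "result": result}

entry = make_entry("execute TP-1", "J. Smith", "pass")
print(entry["result"])
```

Recording UTC timestamps in ISO format keeps entries from different test environments directly comparable.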