
Guidance 055 – Documenting IQ, OQ, PQ Protocol Test Results for Equipment, Facility and Computer

Introduction

This document sets out guidelines for documenting IQ/OQ/PQ protocol test results for equipment, facility, computer and computer-related systems.

Test results should be documented in a manner permitting objective pass/fail decisions to be reached. The following represents the objectives of good test documentation practices. The degree to which these are achieved should be based on the criticality of the function being tested.

  • Provide evidence that a function has been tested against predetermined specifications or acceptance criteria. Identify what was tested, by whom, and when. Provide traceability to approved requirements/specifications.
  • Specify the testing method in sufficient detail (input data, test steps, test conditions, data collection, etc.) to allow the test to be re-executed under equivalent conditions in the future or by a second, independent tester.
  • Capture objective evidence of the test results, in sufficient detail to allow an independent assessment of the actual results against the acceptance criteria.

It is not good practice for the tester simply to record a check mark (Pass/Fail), initials, or “as expected” (or a similar notation of acceptance) as the actual results without providing evidence of the result of the test step. Marking pass or fail is allowable only where supporting evidence is provided, especially when a reference or a numeric result should be recorded. Alternative documentation methods (e.g., the use of test keywords, codes, measured values, or attachment references such as screen prints and reports) may be utilized as long as the methodology is defined and provides unambiguous results.

Recommendations & Rationale

Although the format used for the test section of a validation/qualification protocol is largely dependent on the site’s testing methodology and documentation practices, a typical script format is normally adopted for testing automated or non-automated systems. Each test script could consist of the following elements (a simple data-model sketch follows the list):

  • Title: Identifies the test title.
  • Objective: Describes the objectives of the test.
  • Pre-requisites: Lists the test set-up and all items that are needed before the test can start.
  • References: Identifies the Requirement(s) and/or Specification(s) that will be verified by the test, to provide traceability.
  • Test Ref./Test Step Number/Test Procedure Number: A unique identification reference for each test step.
  • Instructions: Describes how to perform the test step and what printed evidence, if any, is required to support the actual results.
  • Acceptance Criteria (Expected Results): Lists the set of pre-defined criteria that must be met for the test step to be deemed to have passed.
  • Actual Results: The actual results generated from performing the test step instructions are recorded. The test script executor records the actual response of the system to the test. Where indicated, objective evidence (e.g. screen prints or reports) should be labeled with the protocol number, test procedure number, step number, screen print reference number, page number, and initials and date.
  • Pass / Fail: For each test step, “Pass” or “Fail” is recorded according to whether the actual results meet the expected results. For a result to be signed off as “Pass”, every item in the acceptance criteria must have been met.
  • Deviation Number: If test step deviations occur, record the deviation number(s).
  • Tested by Initials / Date: The initials of the person executing the test steps and the date each test step was executed. A signature page detailing the full name and signature of all persons involved in test execution must be included in the protocol.
  • Comments: Any comments made during test execution will be documented.
  • Reviewed by: A person other than the one who executed the tests in a section reviews each page of the test results and the screen prints, then signs and dates the “Reviewed By” field to verify the accuracy and completeness of the information.
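By way of illustration only, the elements above can be modeled as a simple record structure where a site chooses to manage test scripts electronically. The following is a minimal sketch in Python; the class and field names (TestStep, tested_by, etc.) are our own assumptions and are not prescribed by this guidance.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class TestStep:
    """One step of a qualification test script (illustrative field names)."""
    step_number: str                     # Test Ref./Test Step Number (unique)
    instructions: str                    # how to perform the step
    acceptance_criteria: str             # pre-defined expected results
    actual_result: str = ""              # observed response as recorded
    evidence_refs: list[str] = field(default_factory=list)  # screen prints, reports
    passed: Optional[bool] = None        # True = Pass, False = Fail, None = not executed
    deviation_numbers: list[str] = field(default_factory=list)
    tested_by: str = ""                  # initials of the executor
    test_date: Optional[date] = None
    comments: str = ""
    reviewed_by: str = ""                # independent reviewer of the record

@dataclass
class TestScript:
    """A test script within a validation/qualification protocol."""
    title: str
    objective: str
    prerequisites: list[str]             # test set-up items needed before starting
    references: list[str]                # requirement/specification IDs (traceability)
    steps: list[TestStep] = field(default_factory=list)
```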

Testing documentation should identify the “actual test result” observed during each test. There are a number of acceptable means of documenting the actual test result:

1. Re-writing the entire text and measured values identified in the acceptance criteria as the “actual results”. It is not good practice for the tester simply to record a check mark (Pass/Fail), initials, or “as expected” (or a similar notation of acceptance) as the actual results without providing evidence of the result of the test step. It is allowable to mark pass or fail as long as evidence is provided, especially when a reference or a numeric result should be recorded. Without an actual observed value or a screen shot, the assurance that the actual result was observed is limited. For critical steps, printed evidence should be included in the test results. For non-critical steps, it is not necessary to capture a screen if a test is incidental to proving a user requirement. However, where printed evidence cannot be produced during the execution of a critical function step, the test step result may be recorded in the Actual Results column.
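The evidence rule above can be expressed as a simple check. The hypothetical function below, building on the illustrative TestStep record sketched earlier, refuses to sign off a “Pass” when only a bare acceptance notation has been recorded, and additionally demands attached printed evidence for critical steps. The criticality flag and the set of bare notations are assumptions made for the sketch.

```python
BARE_NOTATIONS = {"", "pass", "fail", "ok", "as expected"}

def can_sign_off_pass(step: TestStep, critical: bool) -> bool:
    """Return True only if the step carries enough objective evidence
    to be recorded as 'Pass'."""
    if step.actual_result.strip().lower() in BARE_NOTATIONS:
        return False                      # bare notation: no objective evidence
    if critical and not step.evidence_refs:
        return False                      # critical steps need printed evidence
    return True
```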

Re-writing the acceptance criteria in full, however, generally represents more than what is required. Test results should be documented in a manner permitting objective pass/fail decisions to be reached. Testers are not required to write lengthy responses; it is acknowledged that doing so may introduce inaccuracies relative to the original observed value or result. There does, however, need to be sufficient information captured in the actual results to permit an effective comparison against the expected results. Testing instructions can be used to specify how much information to capture for various types of testing.

In testing certain critical functions, it may be appropriate to completely rewrite the acceptance criteria (expected results) as the actual results. In these cases, the tester’s wording must be detailed enough to provide an objective and reproducible recreation of events. It is not required that each test step have its actual result documented in the same manner; for example, the tester may simply write the observed result if it is relatively brief, but may attach printouts for lengthier responses.

It is both good practice and a requirement for a tester to sign/initial, date, and record the protocol number, test procedure number, and step number on each page of a report or screen print. This helps ensure that all pages remain traceable to their tests. If the generation of a lengthy report is being tested, it is acceptable to document only the first page of the report.
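As an illustration of this labeling rule, the helper below composes a traceability label from the items listed above. The content of the label comes from this guidance; the ordering, the separator format, and the function name are assumptions.

```python
from datetime import date

def evidence_label(protocol_no: str, test_proc_no: str, step_no: str,
                   ref_no: str, page_no: int, initials: str, signed: date) -> str:
    """Compose the traceability label written on each evidence page."""
    return (f"Protocol {protocol_no} / Test {test_proc_no} / Step {step_no} / "
            f"Ref {ref_no} / Page {page_no} / {initials} {signed.isoformat()}")

# Example:
# evidence_label("P-0123", "TP-05", "7", "SP1", 1, "JS", date(2024, 3, 14))
# -> 'Protocol P-0123 / Test TP-05 / Step 7 / Ref SP1 / Page 1 / JS 2024-03-14'
```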

2. Entering only test keywords to signify successful completion of the test. In this approach, only the test keywords (those highlighted in the expected results column) are documented in the actual results column. This approach is recommended over recording the full test result.
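For illustration, a reviewer applying this approach might verify that every highlighted keyword from the expected results column appears among the words recorded in the actual results column. The sketch below is hypothetical; simple whitespace splitting stands in for whatever keyword convention a site defines.

```python
def keywords_recorded(expected_keywords: set[str], actual_result: str) -> bool:
    """True if every highlighted expected-results keyword appears among
    the words recorded in the actual results."""
    recorded = set(actual_result.lower().split())
    return {kw.lower() for kw in expected_keywords} <= recorded

# Example:
# keywords_recorded({"COMPLETE", "SAVED"}, "run complete / file saved")  -> True
```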

3. Attaching references to signify successful completion of the test. In this approach, a reference to the relevant test attachment(s) is documented in the actual results column. For alarm messages, record the actual alarm message observed on the screen, or provide a reference number (e.g., AM1, AM2, etc.) to a screen print containing the message text observed. When using a reference number, record the reference number on the screen print, then initial and date the entry to indicate that the message has been verified. The screen print or raw data should not be reproduced or retyped in any format, as the original constitutes the formal test documentation. Refer to Appendix A for protocol examples of these documentation approaches.
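As a sketch of how the reference-number scheme might be cross-checked during review, the hypothetical function below flags any AM-style reference cited in an actual-results entry that has no matching screen-print attachment. The AM1/AM2 numbering follows this guidance; the dictionary mapping and the regular expression are assumptions.

```python
import re

def missing_alarm_refs(actual_result: str, attachments: dict[str, str]) -> list[str]:
    """List alarm-message reference numbers (AM1, AM2, ...) cited in an
    actual-results entry that lack a matching screen-print attachment."""
    cited = re.findall(r"\bAM\d+\b", actual_result)
    return [ref for ref in cited if ref not in attachments]

# Example:
# missing_alarm_refs("Alarm shown, see AM1 and AM2", {"AM1": "am1_print.pdf"})
# -> ['AM2']
```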

Appendix A: Protocol Examples