Managing Out of Specification Result Investigations in Quality Control

Regulators are very sensitive about how any out-of-specification (OOS) laboratory test result is treated. Laboratories are required to have written procedures setting out the steps to take when any result does not meet specifications (generally known as OOS rules).

OOS rules require that any single result that does not meet specification must be investigated, and must not be discarded without written justification or evidence that it was a genuine analyst error. In addition, simply averaging a failed result and a passing result to obtain an average passing result could be interpreted as “testing into compliance”.

The quality of reported data and results depends on two principles:

Reporting of results and release approvals:

No data or test result shall be excluded from final calculations unless it can be shown that there is a valid reason for exclusion. Any exclusion must be documented with full justification and with the written approval of the QA Manager or Laboratory Manager.

Investigation of out-of-specification (OOS) events:

All OOS events must be investigated and resolved in a timely manner. All investigations, conclusions, decisions and corrective actions must be documented and retained as part of the official laboratory records for that particular lot.

There are four categories of OOS events:

Category 1: Laboratory Error

Types of laboratory error include analyst errors, incorrect calculations, malfunctioning equipment, use of incorrect standards, incorrect sample preparation, and mis-measurement. Confirmation that this was the cause of the OOS condition would not constitute a product failure.

Laboratory error should be relatively rare. Frequent errors suggest a problem that might be due to inadequate training of analysts, poorly maintained or improperly calibrated equipment, or careless work. Whenever a laboratory error is identified, the laboratory should determine the source of that error and take corrective action to prevent recurrence.

Category 2: Lack of Method Precision

In the original legal ruling regarding OOS conditions, this category was not recognized, since it was assumed that all validated test methods should have adequate precision.

Individual results may fall outside the specifications by chance alone due to inherent variation within the assay. Confirmation that this was the cause of the OOS condition would not constitute a product failure. However, it would indicate that the method was inadequately validated, due to a lack of precision.
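
As a rough illustration (all of the figures below are hypothetical and not taken from the text), the following sketch estimates how often a single result could fall outside specification purely because of method variation, assuming results are normally distributed around the true value with the method’s validated standard deviation:

```python
# Hedged sketch with hypothetical numbers: estimate the chance that one result
# falls outside the specification limits purely because of method variation,
# assuming results are normally distributed around the true value.
from math import erf, sqrt


def normal_cdf(x: float, mean: float, sd: float) -> float:
    """Cumulative probability of a normal distribution at x."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))


true_value = 100.0                  # hypothetical true potency (% of label claim)
method_sd = 2.0                     # hypothetical method standard deviation
spec_low, spec_high = 95.0, 105.0   # hypothetical specification limits

# Probability that a single reportable result lands outside the specification
p_oos_by_chance = normal_cdf(spec_low, true_value, method_sd) \
                  + (1.0 - normal_cdf(spec_high, true_value, method_sd))

print(f"Chance of an OOS result from method variation alone: {p_oos_by_chance:.2%}")
```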

Category 3: Non-Process-Related or Operator Error in Manufacturing

This category is concerned with human or mechanical errors that occur during manufacture, for example failure to add a component, malfunction of equipment, or cross-contamination. Confirmation that this was the cause of the OOS condition would constitute a failure with respect to that batch.

Category 4: Process or Manufacturing Problem

This category is concerned with process or manufacturing errors arising from inadequate control over processes, for example incorrect mixing times or heterogeneity of blends.

Confirmation that this was the cause of the OOS condition would constitute a failure of that particular lot, but may also mean that other lots are affected as potential failures.

Origin of OOS Events

In the early 1990s, the FDA inspected a major generic manufacturer in the US, and identified concerns with failing test results. In the FDA’s view, the company was not investigating OOS results thoroughly but merely re-testing the product into compliance.

The manufacturer challenged the FDA’s view, and the matter ended up in the US District Court. The court ruled that any individual OOS event must be investigated and that, if a laboratory error cannot be identified, the batch fails the test.

This case established the universal rules or principles for the treatment of OOS conditions:

  • Conduct an initial informal laboratory investigation.
  • If required, conduct a formal management investigation.
  • Ensure that proper documentation is maintained and corrective action is undertaken.

The FDA subsequently published a number of guidance documents regarding this matter. The current guidance provides clear requirements for how to investigate and resolve an OOS incident. The principles behind this guidance are used by all regulators.

Initial laboratory investigation

The exact cause of an OOS event can be difficult to determine specifically, and it is unrealistic to expect that analyst errors will always be documented. Nevertheless, a laboratory investigation must consist of more than a simple retest.

Simply retesting as a strategy raises three problems:

  • It is a regulatory requirement to conduct an investigation before any retests are conducted.
  • It may imply that there is a lack of control: the laboratory is not concerned as to the causes of possible failures, and therefore these conditions may recur on retest.
  • It may lead to the original result being discarded without being formally invalidated.

It is therefore essential that a laboratory predetermines its course of action in the event of an OOS condition. This should be done through a combination of an SOP supported by an OOS Report Form. This approach minimizes errors and subjective decisions, and ensures the investigation is thorough and fully documented.

Responding to an OOS Event

The following guidelines should appear in the SOP for investigating OOS events:

  • Any single OOS result must not be met with a simple retest as a means of investigation. Investigation must precede any decision to retest. A step-by-step review of the calculations, test method, notebooks, and instrument is required.
  • If it can be clearly established that the cause of the OOS is a laboratory error (Category 1), then the documentation may be limited to the individual testing record that was used to record the OOS, countersigned by a supervisor. In these circumstances, a formal management investigation is not warranted.

Formal Management Investigation

If the OOS cannot be invalidated by a laboratory investigation, or if there are multiple OOS conditions, a full-scale formal inquiry should be conducted involving management, QA, and QC personnel. Multiple OOS results would indicate that the cause of the problem is most likely to be either Category 3 or 4.

At this time, the investigation should extend beyond the laboratory to the production area. The batch record and any batch processing deviations should be reviewed in an attempt to formally identify the source of the OOS. Resampling may occur at this point.

The formal investigation is conducted in order to identify process or non-process-related errors. Ultimately, QA management will review the investigation, draw conclusions, and propose corrective actions.

Some Requirements for Expanded Investigations

Formal investigations extending beyond the laboratory (Category 3 or 4) must follow an outline with particular attention to corrective action. The company must:

  1. State the reason for the investigation
  2. Summarise the process sequences that may have caused the problem
  3. Outline the corrective actions necessary to save the batch and prevent similar recurrence
  4. List other batches and products possibly affected, the results of the investigation of these batches and products, and any corrective action. Specifically:
  • examine other batches of product made by the errant employee or machine
  • examine products produced by the errant process or operation
  5. Preserve the comments and signatures of all production and quality control personnel who conducted the investigation and approved any reprocessed material after additional testing

OOS Documentation

One of the most important aspects of any investigation, whether it is informal in the laboratory or formal by management, is the requirement to fully document each step of the investigation and any conclusion found.

The documentation may be reviewed at some later stage by regulatory authorities to determine whether the company is deliberately discarding or ignoring aberrant results.

The documentation trail must start once the OOS condition is detected and before any action is taken. The best way to control this is to establish clear rules for analysts, supported by training, auditing, and a standard investigation report form.

Corrective action steps and management decisions should also appear on the investigation record, and should include an assessment of whether any other tests have been compromised.

In summary, the OOS documentation needed for an investigation should include:

  • An SOP that predetermines the laboratory’s course of action
  • A standard form for analyst investigation
  • An authorized investigation report
  • An OOS trend record/register

Analysts’ mistakes, such as undetected calculation errors, should be specified with particularity and supported by evidence. Investigations along with conclusions reached must be preserved with written documentation that enumerates each step of the investigation. The evaluation, conclusion and corrective action, if any, should be preserved in an Investigation or failure report and placed into a central file.

(Ref: FDA Inspection Guide, Pharmaceutical Quality Control Labs (7/93))

The Investigation Report

A written record of the review should include the following information.

  1. A clear statement of the reason for the investigation.
  2. A summary of the aspects of the manufacturing process that may have caused the problem.
  3. The results of a documentation review, with the assignment of actual or probable cause.
  4. The results of a review made to determine if the problem has occurred previously.
  5. A description of corrective actions taken.

Retesting and Resampling

The actions resulting from an OOS investigation follow some important rules.

  • A retest is defined as additional testing on the same sample (from the same bottle of tablets or capsules and the same drum or mixer). Retesting is acceptable only after the investigation has commenced, and only if retesting is appropriate to investigate the OOS event.

Retesting cannot continue indefinitely. The retest plan should nominate the number of retests required. At the conclusion of retesting, a decision to either accept or reject the batch should be made. Additional retesting should not be conducted simply to “test the product into compliance”.
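
As a hedged sketch of this idea (the class, field names, and numbers below are illustrative assumptions, not taken from any guidance document), a predetermined retest plan might be represented as follows:

```python
# Hedged sketch: the SOP nominates the number of retests in advance, and the
# batch is accepted or rejected once the plan is complete; retesting is never
# allowed to continue indefinitely. All names and values are hypothetical.
from dataclasses import dataclass, field


@dataclass
class RetestPlan:
    max_retests: int            # number of retests nominated in the SOP in advance
    spec_low: float
    spec_high: float
    results: list[float] = field(default_factory=list)

    def record(self, result: float) -> None:
        """Record one retest result, refusing to exceed the predetermined plan."""
        if len(self.results) >= self.max_retests:
            raise RuntimeError("Retest plan exhausted; accept or reject the batch now")
        self.results.append(result)

    def all_within_spec(self) -> bool:
        return all(self.spec_low <= r <= self.spec_high for r in self.results)


# Hypothetical usage: the SOP nominates two retests, specification limits 95-105
plan = RetestPlan(max_retests=2, spec_low=95.0, spec_high=105.0)
plan.record(99.4)
plan.record(100.2)
print("Accept batch" if plan.all_within_spec() else "Reject batch and investigate further")
```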

FDA Guidance:

  • A retest is acceptable if the review of the analyst’s work indicates an analyst’s error (Category 1). In this case, limited retesting is required, and the original result may be replaced by the retest result.
  • A retest is acceptable if the investigation is inconclusive as to whether the OOS is Category 1 or 2, as the laboratory needs to find out whether the OOS result is an outlier or a reason to reject the batch.
  • Retest results should be used to supplement initial results in Category 2.
  • Resampling is appropriate where provided for by an official monograph, for example in sterility, content uniformity, and dissolution testing.
  • In the limited circumstances in which an OOS Investigation suggests that the original sample is unrepresentative, resampling is acceptable. Evidence, not mere suspicion, must support a resample designed to rule out preparation error in the first sample.

(Ref: FDA Inspection Guide, Pharmaceutical Quality Control Labs (7/93))

Averaging of Final Results

In general, relying on the average figure without examining and explaining the individual OOS results is highly misleading, and is unacceptable to regulatory bodies.

Although averaging test data can be used to summarise results, laboratories should avoid the practice of only reporting averages, because averages hide the variability among individual test results. The reporting of final results should include all individual results and the average result. Individual results should be reviewed by management when releasing batches for sale.

Averaging is particularly troubling if testing generates both OOS and passing individual results which, when averaged, are within specification. This may occur in the case of Category 2 conditions.
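
A simple numerical illustration (the figures are hypothetical and not drawn from the text) shows how a passing average can hide an individual OOS result:

```python
# Hedged sketch with hypothetical numbers: the average sits comfortably within
# specification even though one individual result is out of specification.
results = [88.5, 101.2, 103.8]        # hypothetical assay results (% of label claim)
spec_low, spec_high = 90.0, 110.0     # hypothetical specification limits

average = sum(results) / len(results)
oos_results = [r for r in results if not spec_low <= r <= spec_high]

print(f"Average: {average:.1f} (within specification: {spec_low <= average <= spec_high})")
print(f"Individual OOS results hidden by the average: {oos_results}")
# Average: 97.8 (within specification: True)
# Individual OOS results hidden by the average: [88.5]
```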

There are both appropriate and inappropriate uses of averaging test data during original testing and during an OOS investigation.

(Note: In certain circumstances, principally for biological assays, individual monographs may allow the reporting of average results alone.)

Use of Averaging Test Data 

  1. Appropriate Uses

In some cases, a series of complete tests (full run-throughs of the test procedure), such as assays, are part of the test method. It may be appropriate to specify in the test method that the average of these multiple assays is considered one test and represents one reportable result. In this case, limits on acceptable variability among the individual assay results should be based on the known variability of the method and should also be specified in the test methodology. A set of assay results not meeting these limits should not be used.
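
A minimal sketch of this approach follows; the function, the replicate values, and the 2.0% RSD limit are illustrative assumptions rather than values from the guidance:

```python
# Hedged sketch: replicate assay results are averaged into one reportable result
# only if their variability stays within a limit specified by the test method.
from statistics import mean, stdev


def reportable_result(replicates: list[float], max_rsd_percent: float) -> float:
    """Return the average as the single reportable result, or raise an error if
    the replicates are more variable than the method allows."""
    avg = mean(replicates)
    rsd = 100.0 * stdev(replicates) / avg   # relative standard deviation (%)
    if rsd > max_rsd_percent:
        raise ValueError(
            f"RSD {rsd:.1f}% exceeds the method limit of {max_rsd_percent}%; "
            "the results should not be averaged and an investigation is needed"
        )
    return avg


# Hypothetical example: three replicate assays with a method limit of 2.0% RSD
print(reportable_result([99.2, 100.1, 100.6], max_rsd_percent=2.0))
```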

  2. Inappropriate Uses

Reliance on averaging has the disadvantage of hiding variability among individual test results. For this reason, all individual test results should normally be reported as separate values. Where averaging of separate tests is appropriately specified by the test method, a single averaged result can be reported as the final test result. In some cases, a statistical treatment of the variability of results is reported. For example, in a test for dosage form content uniformity, the standard deviation (or relative standard deviation) is reported with the individual unit dose test results.

Averaging can also conceal variations in different portions of a batch, or within a sample. For example, the use of averages is inappropriate when performing powder blend/mixture uniformity or dosage form content uniformity determinations. In these cases, testing is intended to measure variability within the product, and individual results provide the information for such an evaluation.

(Ref: FDA Guidance for Industry, Investigating OOS Test Results for Pharmaceutical Production, 2006)

Examples of OOS Regulatory Citations

Below are some actual audit deficiencies cited by regulators:

  • There was no investigation of temperature deviations during a stability study.
  • OOS investigations failed to follow the retest procedure.
  • The conclusion of an OOS investigation was not supported by data or documentation.
  • The invalidation of OOS results as being caused by improper sample preparation was not supported by data or documentation.
  • The manufacturing process/raw materials/batch record history was not reviewed as required by the retest procedure.
  • The product was released after an OOS result on the basis of a grand average that included both in- and out-of-specification results.

Internal laboratory audits 

Good internal auditing practice means establishing a cooperative focus on improving levels of compliance rather than simply issuing a list of non-conformances.

In order to do this, the audited department should be open and helpful to the audit group, and the audit group should be constructive in any criticisms. After all, the purpose is to ensure that the laboratory meets accepted compliance standards: it is in everyone’s interest.

An essential part of operating a laboratory quality management system is the need to conduct audits as part of a QA program. This is required by ISO 17025 and by regulatory agencies.

Audits are conducted to verify good procedures and practices, and to identify problems and opportunities for improvement.

Laboratories should consider:

  • Quality or compliance audits
  • Regular monitoring of test performances
  • Quality system reviews

Credibility of the QC Laboratory:

The credibility of the QC laboratory hinges on good quality management.

Documentation and records maintained in the laboratory should be able to withstand auditing. Specifically, internal audits aim to verify that:

  • Results are reliable and accurate
  • The information flow to customers is timely
  • Test results are unambiguous
  • The laboratory is economically efficient
  • Historical records and data are retrievable
  • The laboratory operates independently in quality assessment

Be aware that loss of credibility results in:

  • Loss of confidence and skepticism by users
  • Inappropriate resource utilization, leading to excessive costs
  • A focus on “defending” results
  • Rejection of good and acceptance of bad quality
  • Isolation of the laboratory
  • Undue regulatory or external pressure on the company

Audits and preparation

The following audits may be conducted:

  • Safety
  • Compliance
  • Regulatory
  • Investigation
  • Trend Analysis

When you are being audited:

  • Be prepared.
  • Be positive.
  • Stick to the facts.
  • Don’t offer personal opinions.
  • Don’t offer irrelevant information.

You will most likely have to show your laboratory notebooks, test methods, and “objective evidence” of how you conducted a test or calculated a result. Have all the documentation that might be needed readily available. You may be asked to assist because you are either an expert in the area or have particular knowledge of an issue or event.

External Audits

External audits are generally conducted by purchasers, contractors, and accrediting bodies.

Generally, an external audit will include at least the following areas:

  • The laboratory quality management system
  • The written procedures
  • The validation of test methods
  • The control over instrumentation and equipment
  • The test methods and how they are applied
  • The laboratory record keeping practices
  • The procedures for evaluation and release of results