
Guidance 063 – Quality Risk Management for Critical Instrument Calibration

Quality Risk Management (QRM) Application:

Critical Instrument Calibration Interval Change via Risk Analysis

Introduction

This document offers a risk assessment approach for documenting a critical instrument calibration interval change request.

Non-Critical Instruments

Calibration frequencies for non-critical instruments, if any, can be adjusted by the maintenance team as appropriate based on instrument history and other factors. This practice has no impact on non-critical instrument interval change opportunities.

Critical Instruments

Calibration frequencies for critical instruments may be adjusted as necessary based on calibration data or other information that may support a change. Before extending calibration intervals, review the calibration history of the instrument based on the table below. Consider the results of the calibrations [e.g., Return to Service (RTS) limit exceeded, etc.] in the listed time window when modifying frequency.

Interval Change Adjustment            Consecutive # of Most Recently Completed Calibrations (without OOT results)
From Weekly to Monthly                12
From Monthly to Quarterly             12
From Quarterly to Semi-Annually       12
From Semi-Annually to Annually        12

The interval change table above, applied against the instrument's calibration history, is the primary method of identifying opportunities for calibration interval changes.
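As a simple illustration of the table above, the following Python sketch checks whether an instrument's most recent calibration history satisfies the "12 consecutive completed calibrations" criterion before an interval extension is considered. The record structure and field names are assumptions for illustration only, not a prescribed format.

REQUIRED_CLEAN_CALIBRATIONS = 12  # from the interval change table above

def eligible_for_interval_extension(calibration_history):
    # calibration_history: list of dicts, most recent last, e.g.
    # {"date": "2023-01-15", "rts_limit_exceeded": False}  (illustrative fields)
    recent = calibration_history[-REQUIRED_CLEAN_CALIBRATIONS:]
    if len(recent) < REQUIRED_CLEAN_CALIBRATIONS:
        return False  # not enough completed calibrations to evaluate
    # All of the most recent calibrations must be free of RTS / OOT results
    return all(not c["rts_limit_exceeded"] for c in recent)

# Example usage with a fabricated history of 12 clean results:
history = [{"date": f"2023-{m:02d}-01", "rts_limit_exceeded": False} for m in range(1, 13)]
print(eligible_for_interval_extension(history))  # True -> candidate for extension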

Consideration should be given to the level of risk before making changes in calibration interval. For instruments that are considered to be minimal risk, an informal concise assessment is appropriate.

Where service requirements or other information indicates substantial risk associated with failure of a critical instrument, a more formal risk analysis can be used to confirm the calibration interval change.

The calibration risk evaluation should consider how a deviation report involving the instrument might affect release of the product lots in question. The extent to which the instrument would impact the product is a good indicator of risk. A more conservative extension of the calibration interval can then be made, if appropriate.

Recommendations & Rationale for Recommendations

Risk Assessment Tool – Failure Mode and Effects Analysis (FMEA) is the recommended tool for calibration interval change analysis. Its use enables identification of potential failure modes and assignment of a numerical ranking using the probability, severity, and detectability of the risk (Tables I, II, and III, respectively).

Risk Assessment – Identification, analysis, and evaluation of potential risks

The impact of an instrument calibration failure, from the standpoint of probability, severity, and detectability, may be determined by weighing the multiple parameters associated with each criterion, as illustrated in Tables I–III. This section provides additional narrative description in support of each table, which contains guidance on how these parameters can affect the risk of experiencing an out-of-tolerance (OOT) condition for an instrument.

Probability

The probability (or likelihood) of instrument failure may be attributed to:

a) design and construction,

b) the environment it is exposed to, and

c) how it is used.

Knowledge of the effects of design and construction can be gained through a review of the maintenance history of the instrument, comparing it to similarly designed instruments, and by knowing the age of the instrument (period of time in use). For each of these parameters, if the data and relevant information are not known, the risk should be assumed to be high.

The following criteria may be used to determine risk ranking for failure probability. Refer to Table I below.

1) History – There are three (3) possible scenarios, illustrated in Table I, where instrument history may be used to determine the risk ranking for failure probability.

(i) Availability of recorded history of an instrument in its current location,

(ii) Availability of history of identical instrumentation of the same make and model in the same area, and

(iii) Availability of history of similar instrumentation in a similar environment.

Risk ranking is determined by the length of recorded history available for an instrument, the number of instruments available for data gathering, and the typical interval between observed failures (mean time between failures, MTBF). When the number of instruments in place, combined with the use history (e.g., >2 years), is sufficient to have observed most, if not all, potential failure modes (i.e., the MTBF is long, >24 months), the risk should be considered low.

The absence of historical records, a lack of identical or similar instruments to benchmark against, and an MTBF of <24 months indicate a higher risk. If there are less than 2 years of historical records, the number of identical or similar instruments is less than sufficient (i.e., fewer than 3 identical and fewer than 10 similar instruments), and the MTBF is >24 months, then the risk should be considered medium. A brief sketch after this list illustrates one way these history criteria could be encoded.

2) Environment – The environmental situation can be divided into sub-categories as illustrated in Table I. For purposes of risk assessment, the environmental sub-category with the highest risk determines the risk ranking for failure probability.

Due to the dynamic nature of a work environment, a ranking cannot be based solely on one factor; all influences should be considered. For example, consider a transmitter located in a clean, dry area that does not get washed down but operates in an unstable environment: while the transmitter may be adequately protected and isolated from dirt, dust, or exposure to washdown, the vibration and shock to which it is exposed can physically fatigue components and cause erroneous readings. In this case, the vibration and shock sub-category determines the environmental risk ranking for failure probability.

3) Range of use – Instruments designed around transducers need to be used within their design range. Instruments are more linear and predictable in the middle of their range. Exposing instruments to inputs outside of the design range may create conditions of non-linearity that are not readily apparent. Risk ranking is determined by the potential for range excursions and exposure to conditions that may take the device to the edge of its range or beyond.

4) Age – The age of an instrument can be an indicator of its technology. Certain technologies are more prone to breakdown as they reach the end of their operational life expectancy. Self-correcting digital instruments have only been around for a few years. Older analog instruments are subject to component aging, drift, and non-linearity. Additionally, older digital instruments may have firmware that is not current, or failing power supplies that do not allow for proper circuit performance. Risk ranking is determined by the instrument's length of service; brand-new (infant mortality) or very old (aging components) instruments carry the highest risk.
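The history criteria in item 1 lend themselves to a simple decision rule. The following Python sketch is one illustrative reading of those criteria (2 years of history, 3 identical or 10 similar benchmark instruments, and a 24-month MTBF); the function and parameter names are assumptions, not part of this guidance.

def history_probability_rank(years_of_history, n_identical, n_similar, mtbf_months):
    # No failure data, or failures observed more often than every 24 months:
    # treat as high probability of failure.
    if mtbf_months is None or mtbf_months < 24:
        return "High"
    # "Sufficient" benchmark population per the narrative above:
    # at least 3 identical or at least 10 similar instruments.
    sufficient_population = n_identical >= 3 or n_similar >= 10
    if years_of_history > 2 and sufficient_population:
        return "Low"   # long clean history across enough instruments
    return "Medium"    # MTBF is acceptable but history/population is thin

print(history_probability_rank(years_of_history=1.5, n_identical=2,
                               n_similar=4, mtbf_months=30))  # "Medium"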

Severity

There are several factors that may define the severity (or consequence) of instrument failure. The following brief narrative descriptions supplement the guidance provided in Table II below.

1) Human safety – A direct threat to human safety defines the most severe consequence of a calibration OOT. If an instrument reading (or alarm) is the main protection against severe or potentially fatal injury, such as a breathing air, oxygen level, or lethal compound monitor, then severity is potentially high. Whether the instrument is a primary component in a safety system, or part of a redundant system, will determine the severity of this risk.

2) Environmental safety – Instruments that prevent, or alarm on, conditions of hazardous chemical release are examples of this risk. Whether these instruments are the sole indicator of an environmental release, or have back-up or redundancy, determines the relative severity of the environmental safety risk.

3) GMP (or GxP) compliance – Typically, an evaluation is made during Commissioning & Qualification to determine the GMP impact of systems and components. If an instrument’s performance is integral to demonstrating compliance with product specifications, then the risk severity depends on whether the data derived represent the sole measure of an attribute or whether the attribute is further assessed through another measure or test later in the process.

Instances where an attribute is further characterized by testing performed later in the process may warrant a lower severity ranking than instruments that determine compliance to specifications without further verification.

4) Production impact – Yield and throughput can be optimized through reduction in production process variability as determined through instrument readings. If an instrument is determined to have an impact on production, then maintaining calibration accuracy is important and should be reflected in the severity ranking.

5) Cost – This represents the potential damage to machines or facilities that may result from an instrument or alarm reaching an OOT condition. The cost to repair or replace damaged assets may be avoided by maintaining instrument accuracy. It is a good practice to determine the effects of instrument OOT on potential damage to other assets.

Whether the calibration OOT causes additional expense relative to the cost of repair or replacement of damaged assets, or whether its impact could be reduced through minor additional or other resources, will determine the severity risk ranking.

6) Energy consumption – One specific consequence of an instrument OOT could be increased energy consumption. When machines are not operating optimally, they frequently require more energy. Examine the OOT consequence in light of increased energy consumption (additional heating, cooling, fuel, or electricity) to determine the appropriate severity ranking.

Detectability

Being able to immediately detect an instrument OOT condition may mitigate the impact of such a condition on the system, process, or even the product with which it is associated or used. Immediate detection is determined by whether the system or process utilizing the instrument is automated or manual, and whether there are other instruments or tell-tale parameters that change as a direct result of incorrect instrumentation. Refer to Table III below. Systems or processes equipped with automation features or components that make it easier to detect OOT conditions should receive a reduced detectability risk ranking. Systems that have additional instruments or detectable parameters that are frequently observed and compared enable timely identification of OOT conditions, resulting in lower risk.

Risk Acceptance:

Once the probability, severity, and detectability of instrument failure are individually assessed and agreement is reached on the risk associated with each instrument, a site should then define the level of risk it is willing to accept. The FMEA ranking criteria can be used to assign numerical ratings and complete the overall risk evaluation. See Table IV.

To assign the appropriate level of risk, a simple Low, Medium, High model with a corresponding numerical designation of 1, 2, and 3 will be used. Each criterion (probability, severity, detectability) can therefore have a numerical rating of 1, 2, or 3 that will determine the risk score. The risk score for each failure mode is obtained by multiplying the individual scores for each criterion.

For example:

Probability x Severity x Detectability = Risk Score
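As a minimal illustration of this scoring, the Python sketch below maps the Low/Medium/High rankings to 1/2/3 and multiplies them. The names used are illustrative only.

RATING = {"Low": 1, "Medium": 2, "High": 3}

def risk_score(probability, severity, detectability):
    # Multiply the three criterion ratings to obtain the overall risk score.
    return RATING[probability] * RATING[severity] * RATING[detectability]

# e.g. a medium-probability, high-severity, low-detectability-risk failure mode:
print(risk_score("Medium", "High", "Low"))  # 2 x 3 x 1 = 6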

Recommended Frequency for Instrument Calibration Interval Changes – The frequency selected will be relative to the risk score resulting from the assessment.

A low risk score will justify a broader (less frequent) calibration interval than the established guidance table.

A high risk score will require adherence to the calibration table, or possibly team review to set an interval tighter than the published guidance.
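One way a site might translate the risk score into an interval-change action is sketched below. The cut-off values are placeholders only, not the values from Table IV, and would need to be replaced with the site's accepted risk bands.

def interval_recommendation(score, low_cutoff=4, high_cutoff=9):
    # Cut-offs are illustrative placeholders; substitute the site's accepted
    # bands from Table IV.
    if score <= low_cutoff:
        return "Extension of the calibration interval per the guidance table may be justified"
    if score >= high_cutoff:
        return "Adhere to the calibration table, or team review for a tighter interval"
    return "Maintain the current calibration interval"

print(interval_recommendation(2))   # low score -> extension may be justified
print(interval_recommendation(18))  # high score -> adhere or tighten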