There is usually a strong belief that a validated analytical method, whether partly automated or fully manual, is simply perfect and cannot generate any defect, since it complies with highly demanding regulations.

Thus, any apparent variation in a production process is automatically attributed to the manufacturing process or to raw material suppliers. Since suppliers are also validated according to the same regulations, it becomes very costly and time consuming to implement any change, with the risk of seeing no improvement at all.

Two typical cases from two major pharmaceutical companies illustrate situations where the challenge was to admit that a validated analytical method could vary because of insufficient measurement procedures.

Answering a Cultural Question

The Issue: Can Japanese customers perceive defects that are invisible to Western customers?

A world class pharmaceutical company with market leadership in the Japanese market was recently confronted with the harmonization of its supply chain. The biggest challenge was in Japan where manual inspection of tablets still involved several sub-contractors leading to incremental cost and delay in the supply of drugs to patients.

The approach: The company decided to follow the Six Sigma methodology to address the belief that Japanese customers would see defects invisible to Europeans, and that it would therefore still need to keep the manual inspection of tablets.

A transnational team was formed and quickly began working on the baseline, collecting data on the reported number of defects per batch to better define the problem and complete its project charter.

An attribute Gage R&R was carried out to determine the variability of the manual visual inspection of defects and assess its reliability. From a selection of screened tablets, the team picked 80 samples and placed them into transparent plastic frames protecting them from damage. The frames were then sent to 15 different hospitals in Japan, where the local pharmacist was asked to identify any defective tablet, mark it and describe each identified defect that would be unacceptable to them and their patients.

All the tablets used in the test were already inspected by a well-established sub-contractor with decades of experience in manual visual inspection of pharmaceutical products. Furthermore, production and quality control operators were asked to check all 80 tablets three times to assess the in-house repeatability of the test.

The result was dramatic: only 11 percent agreement between and within operators and the contractor. A reliable test would have shown at least 90 percent agreement among all the inspectors of the sample set. Interestingly, the Japanese customers ultimately agreed on only three tablets as being truly defective; all the others had been flagged essentially at random.
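The agreement figure behind such an attribute Gage R&R can be sketched as follows. The data, inspector counts and verdicts below are illustrative inventions, not the actual study results: a tablet counts toward agreement only if every trial of every inspector reaches the same verdict.

```python
# Sketch of an attribute agreement analysis for a visual-inspection
# Gage R&R. All data below are illustrative, not the study's results.

def agreement_rate(ratings):
    """ratings: dict mapping tablet id -> list of verdicts
    ('ok' / 'defect') from every trial of every inspector.
    A tablet counts as 'agreed' only if all verdicts match."""
    agreed = sum(1 for verdicts in ratings.values()
                 if len(set(verdicts)) == 1)
    return agreed / len(ratings)

# Toy example: three inspectors, two trials each, five tablets.
ratings = {
    1: ["ok"] * 6,                                    # full agreement
    2: ["defect"] * 6,                                # full agreement
    3: ["ok", "defect", "ok", "ok", "ok", "ok"],      # disagreement
    4: ["ok", "ok", "defect", "defect", "ok", "ok"],  # disagreement
    5: ["ok", "ok", "ok", "ok", "defect", "ok"],      # disagreement
}

print(f"overall agreement: {agreement_rate(ratings):.0%}")  # prints 40%
```

With real study data, a result far below the 90 percent threshold, as in the case above, indicates the inspection itself, not the product, is the dominant source of variation.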

For the first time at this company, the unreliability of manual visual inspection was demonstrated.

The solution to this problem was the implementation of a standard visual inspection machine using linear digital cameras to screen tablets at high speed. The sensitivity level was adjusted to satisfy any Japanese client, while at the same time significantly reducing the company's reject rate compared to manual inspection.

Problem of Batch Deviation

The Issue: Batch deviation problems at a biotech company leading to stock-outs and destruction of high-value product.

The production of proteins is known for its extremely long cycle times and is subject to deviations which need to be investigated before batch release. But when batches at the end of the production process become increasingly borderline and frequently out of specification, one may be tempted to redesign the process and scrutinize incoming raw material quality.

Again the question was about the variability of the measurement system. The company was using high performance liquid chromatography (HPLC) analysis – a method for analyzing compounds in solutions. A check was made to see whether the test was capable of detecting variability within the range of the process variation. The measurement system had been validated against the total specification range but was it also capable of detecting a relatively small variation of the process? This question needed to be answered before considering any process modification. The answer was clearly negative. The answer to the deviation problem was in the test method itself.
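The distinction between validating against the full specification range and resolving a small process shift can be illustrated with a quick capability check. All numbers below are hypothetical, chosen only to show the arithmetic:

```python
# Sketch: can a measurement system resolve the process variation?
# All numbers are hypothetical, for illustration only.

spec_lo, spec_hi = 90.0, 110.0   # specification limits (e.g. % assay)
sigma_meas = 1.5                 # measurement system standard deviation
sigma_proc = 2.0                 # observed process standard deviation

# Precision-to-tolerance ratio: fraction of the specification window
# consumed by measurement spread (6-sigma convention); a common
# rule of thumb asks for roughly 10% or less.
ptr = 6 * sigma_meas / (spec_hi - spec_lo)

# Share of the observed variance that is actually measurement noise.
noise_share = sigma_meas**2 / sigma_proc**2

print(f"P/T ratio:         {ptr:.0%}")          # prints 45%
print(f"measurement share: {noise_share:.0%}")  # prints 56%
```

In this hypothetical, the method would pass a validation against the full 20-unit specification window yet still drown a small process shift in its own noise, which is exactly the situation the text describes.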

The belief was that because the method is validated, it is stable over time and does not allow any variation. This is true to a certain extent, but what about the sample preparation? How much room is left to the operators when considering agitation time, waiting time to reach equilibrium in the vials, cleanliness of the glassware, etc.? The result here was the identification of new factors influencing the results of the analysis, factors that were not standardized in the method.

The validation of a method should be looked at differently, following the empirical approach of design of experiments (DOE) instead of relying mainly on expertise or previous experience with similar types of analysis. The outcome is a decidedly more robust test method in which main effects and interactions are clearly evaluated and controlled.
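A minimal sketch of such a robustness study, assuming two hypothetical sample-preparation factors (agitation time and equilibration time) in a 2x2 full factorial design. The factor names and response values are invented for illustration; a real study would use measured assay results:

```python
# Sketch of a 2x2 full-factorial DOE on hypothetical sample-preparation
# factors. Coded levels: -1 = low setting, +1 = high setting.
# Responses are invented for illustration.

runs = [
    # (agitation, equilibration, response)
    (-1, -1, 98.1),
    (+1, -1, 99.5),
    (-1, +1, 98.4),
    (+1, +1, 101.0),
]

def effect(runs, column):
    """Main effect: mean response at the high level minus at the low level."""
    hi = [r[2] for r in runs if r[column] == +1]
    lo = [r[2] for r in runs if r[column] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

def interaction(runs):
    """AB interaction: effect of the product of the two factor columns."""
    hi = [r[2] for r in runs if r[0] * r[1] == +1]
    lo = [r[2] for r in runs if r[0] * r[1] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(f"agitation effect:     {effect(runs, 0):+.2f}")  # +2.00
print(f"equilibration effect: {effect(runs, 1):+.2f}")  # +0.90
print(f"interaction:          {interaction(runs):+.2f}")  # +0.60
```

Factors with large effects, like agitation in this toy data, are exactly the preparation steps that must be pinned down in the standardized method.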

The result of this second example was a simple and inexpensive solution, the standardization of the sample preparation, which saved the company several million Euros in cost of goods destroyed.
