Laboratories can be a taxing environment in which to implement Lean and Six Sigma. Labs are not the same as manufacturing – where Lean and Six Sigma got their start – because they typically have more variability in workload, less operational focus, less process reliability and longer task cycle times. However, through creative adaptation of the techniques it is possible for practitioners to deliver significant improvements in cost or speed.

This case study shows how a Lean Six Sigma project at a leading global pharmaceutical company managed to reduce lead times while improving productivity in the company’s quality control testing laboratory using the DMAIC approach.

Define
Defining the goals of a Lean laboratory project may seem like a simple task, but that’s not always the case. The project goals can be the deciding factor in garnering support for the project from top management and from other employees. So the goals of the project should be chosen to mirror those of the business. The laboratory in this case study had goals of reducing the end-to-end cycle time of their products while keeping the cost per unit as low as possible. The goals of the lab’s DMAIC project reflected these overall site goals.

Tools such as Pareto charts and value stream maps are useful in deciding where the focus of a Lean laboratory project should be. A Pareto analysis of the incoming laboratory workload revealed that the majority of the workload (85 to 95 percent) was driven by three products: Products A, B and C (Figure 1).

Figure 1: Volume of Incoming Laboratory Workload by Product Type

Products A and C were from the same product family, received mostly the same tests and could be tested together at the same time. While Product B accounted for 19 percent of the sample volume, it did not account for 19 percent of the lab's workload because it required only two very simple tests; in comparison, Products A and C received nine different tests. The project team decided to focus exclusively on Products A and C as they accounted for 80 to 90 percent of the lab's workload and were the main priorities of the site.
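As a sketch, a Pareto breakdown like the one behind Figure 1 can be computed as below. The volume figures here are illustrative assumptions, not the site's actual data:

```python
def pareto(volumes):
    """Return (product, share %, cumulative share %) sorted by volume."""
    total = sum(volumes.values())
    rows, cum = [], 0.0
    for product, volume in sorted(volumes.items(), key=lambda kv: -kv[1]):
        share = 100 * volume / total
        cum += share
        rows.append((product, round(share, 1), round(cum, 1)))
    return rows

# Illustrative sample volumes (assumptions, not the site's data):
workload = {"Product A": 45, "Product C": 30, "Product B": 19, "Other": 6}
for product, share, cum in pareto(workload):
    print(f"{product}: {share}% (cumulative {cum}%)")
```

Reading down the cumulative column immediately shows where a project should focus: here the top two products already cover 75 percent of the volume.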

The as-is process map revealed that a significant portion of the testing cycle time was spent on approval and release activities carried out after the batches were fully tested. As a result, the project team decided that approval activities would also be within the scope of the project.

Measure
During the Measure phase of the project, the team set out to establish valid, reliable metrics to monitor progress toward the chosen goals. The lab already had cycle-time metrics in place. A breakdown of cycle times for Product A showed a spread of times centred around 11 to 15 days, which corresponded to the lab's target cycle time of 15 days. Sixty-six percent of samples either met the 15-day target or were completed early, while the remaining 34 percent were late. The average cycle time was 14.8 days (Figure 2).

Figure 2: Product A Cycle Times (January to April)

Next, the project team considered how resources were used in the lab. It was immediately striking that the bulk of the lab's resources were occupied by a single test: Test x. Every sample of types A and C required this test, and samples could not be batched together; each had to be run individually. In addition, the results of this test were required by a separate department before that department could proceed with its own process. As a result, the laboratory heavily resourced this test with the aim of testing every sample every day. This was an inefficient tactic, as it resulted in a variable number of samples being tested each day. For instance, five analysts might test 12 samples on one day and only 4 the next – a 67 percent drop in productivity from one day to the next. A strategy was required that would be consistently productive without adversely affecting cycle times. To achieve this, it would be necessary to control the number of samples tested each day.
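That day-to-day swing is simple to quantify; a minimal sketch using the figures quoted above:

```python
# Day-to-day productivity swing under the old "test every sample every
# day" tactic: the same five analysts tested 12 samples one day and
# only 4 the next (figures from the case study).
def pct_drop(before, after):
    """Percentage fall from one day's output to the next."""
    return round(100 * (before - after) / before)

print(f"{pct_drop(12, 4)}% drop in productivity")  # prints "67% drop in productivity"
```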

Analyze
The Analyze phase of the project looked at all the available data to determine the best way to move toward the desired goals of the project. The project team found that:

  • Each day the lab received between 1 and 17 samples, with an average of 7 per day.
  • Each week the lab received between 25 and 45 samples, with an average of 36 per week.
  • The weekly incoming workload was much less volatile than the daily pattern (coefficient of variation 0.2 versus 0.6).

Therefore, although it was impossible to predict how many samples would arrive on a given day, it was possible to say with reasonable certainty that over the week the lab would receive approximately 36 samples. Because of this smaller weekly variation, it was clear that the number of samples tested could be controlled if a weekly testing pattern was developed. Next, the team determined the required weekly testing rate for each test (the takt rate). The number of samples for each test differed, as Product C received some tests that Product A did not (for example, content uniformity) and vice versa.
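The daily-versus-weekly volatility comparison can be sketched as follows; the arrival counts are illustrative assumptions chosen to be consistent with the reported ranges and averages, not the lab's actual data:

```python
import statistics

def coeff_of_variation(xs):
    """Standard deviation divided by the mean: a scale-free volatility measure."""
    return statistics.stdev(xs) / statistics.mean(xs)

# Illustrative arrival counts (assumptions, consistent with the reported
# ranges: daily 1-17 averaging 7, weekly 25-45 averaging 36).
daily_arrivals = [1, 12, 3, 17, 5, 9, 2, 11, 4, 6]
weekly_arrivals = [33, 40, 35, 25, 45, 38]

print(f"daily CV:  {coeff_of_variation(daily_arrivals):.1f}")
print(f"weekly CV: {coeff_of_variation(weekly_arrivals):.1f}")
```

Averaging arrivals over a week smooths out the day-to-day spikes, which is exactly why planning to the weekly total is feasible when planning to the daily total is not.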

Having analyzed and reviewed all of the data, the team decided on a clear strategy. The lab would run:

  • A fixed, weekly repeating pattern of tests (known as a rhythm wheel).
  • Tests at the weekly average (i.e., the weekly takt rate).
  • Every test every week.
  • Tests of samples in first-in/first-out (FIFO) order.

In reality, the team had to pick a figure slightly above the average test number in order to cope with the expected weekly volatility, deliver acceptable lead times and account for failures and repeat testing. Following this strategy meant that some tests would have to be run more often. To ensure that productivity would not suffer, the team decided to reduce capacity for some tests (e.g., Test x) and reallocate those resources to increase capacity for others. Because a batch is only as fast as its slowest test, the end result would be more uniform overall cycle times across the tests.
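Choosing that planned figure can be sketched as the weekly average plus a buffer for volatility and repeat testing. The 15 percent buffer below is an illustrative assumption, not the figure the team actually used:

```python
import math

# Planned weekly test count: the weekly average plus a buffer to absorb
# volatility and repeat testing. The 15% buffer is an illustrative
# assumption, not the case study's actual figure.
weekly_average = 36
buffer = 0.15

planned_per_week = math.ceil(weekly_average * (1 + buffer))
print(planned_per_week)  # prints 42 with these assumptions
```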

Improve
To improve productivity and ensure consistent results, the team developed standard work for each of the testing roles. The team set about identifying:

  • The optimum number of samples for one analyst to test in one shift.
  • The best order in which to perform test activities.
  • Any improvements that could be made to the process.
  • Long periods of inactive time that could be used to run other short tests.
  • How many times to run the test each week.

Because the new pattern – the rhythm wheel – controls what tests occur each day, it removes much of the unpredictability and volatility that individual analysts experience in day-to-day testing. This provides consistent results, thus ensuring both productivity and shorter lead times.

There was, however, concern over what effect the rhythm wheel would have on lead times for Test x. The team agreed that they would model the outcome for this test before any changes were made. Using data from the previous six months of testing, the model showed that 49 percent of samples would have been tested the day they arrived, 31 percent the next day and the remainder after two days. This was deemed acceptable by all affected process owners.
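A deterministic sketch of that kind of lead-time model is shown below. The Mon/Wed/Fri schedule and the uniform weekday arrival pattern are assumptions for illustration, not the team's actual schedule or data:

```python
from collections import Counter

# Under the rhythm wheel, each sample waits until the next scheduled
# Test x run day. Schedule and arrivals are illustrative assumptions.
TEST_DAYS = {0, 2, 4}                  # assumed: runs Mon, Wed, Fri (0 = Monday)

def wait_days(arrival_day):
    """Days from arrival (0 = Monday) to the next scheduled Test x run."""
    return min((d - arrival_day) % 7 for d in TEST_DAYS)

arrivals = [0, 1, 2, 3, 4]             # assumed: one sample per weekday
dist = Counter(wait_days(d) for d in arrivals)
for wait in sorted(dist):
    share = 100 * dist[wait] / len(arrivals)
    print(f"tested after {wait} day(s): {share:.0f}%")
```

Feeding six months of real arrival data through a model like this, rather than an assumed pattern, is what let the team predict the 49/31/20 percent split before committing to the change.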

Advantages of a rhythm wheel:

  • It was more productive than the old system, requiring only 40 full-time equivalent (FTE) shifts versus 54 FTE – a 26 percent improvement.
  • It removed the uncertainty around the equipment capacity and avoided equipment conflicts.
  • It removed a lot of the stress and scrambling from the daily testing routine for the analysts.
  • Every test was run every week to ensure consistent and short lead times.

To address the long approval and release wait times identified in the Define phase, the process was reengineered to remove this delay: approval activities were run at the laboratory's testing takt rate, with every batch reviewed every day.

Once all the changes were implemented, average cycle times fell from 15 to 8 days. The overall laboratory headcount was reduced from 20 testing analysts to 15, a 25 percent productivity improvement.

Control
The Control phase was initiated to ensure that the lead time and productivity gains established from the project would not be lost or eroded over time. To ensure that analysts knew exactly what was expected of them, the team designed set roles which clearly showed:

  • The activities required for the test role.
  • The best order in which to complete them.
  • Clear break targets.

The set roles were successful at sustaining the productivity within the laboratory.

The key performance indicators for the process were printed and posted weekly to show exactly how the lab's cycle time was performing. Seeing the lab's performance consistently ahead of its targets gave a definite boost to morale. Before the project, 66 percent of samples were tested inside the 15-day target time. After the project was completed, the target was tightened to 10 days, and all samples were consistently tested within it, with an average lead time of 8 days. This translated to an annualized 3.9-fold return on investment for the project.
