iSixSigma

Analyzing Experiments with Ordered Categorical Data

Six Sigma projects in various industries often deal with experiments whose outcomes are not continuous variable data, but ordered categorical data. Analysis of variance (ANOVA) is a technique for analyzing continuous experimental data, but it is not adequate for analyzing categorical experimental outcomes. Fortunately, many other methods have been developed for categorical experiments, such as Jeng and Guo's weighted probability-scoring scheme (WPSS).

The WPSS technique is interpretable and easy to implement in spreadsheet software. The following case study, involving medical devices, shows how a modified WPSS technique can be used to analyze experiments with ordered categorical data.

Determining the Best Factors

This study explores the influence of contact lens design factors on the ease of lens insertion, that is, how easily patients can put their contact lenses in their eyes. Soft contact lenses are thin pieces of flexible plastic that float on the tear film on the surface of the cornea. They are shaped to fit the user's eye and are used to correct refractive errors such as nearsightedness, farsightedness and unequal curvature of the cornea (astigmatism). For this example, only three design factors of a certain lens type with fixed material properties are considered: lens thickness profile (3 levels), base curve dimension (3 levels) and base curve profile (2 levels). Determining the ease of insertion is a five-step process.

Step 1: Design an Experiment

Because this is an exploratory experiment, an L9 orthogonal matrix is used. The design matrix with the three lens design factors is shown in Table 1.

Table 1: L9 Orthogonal Matrix of Three Lens Design Factors

Experiment Number | Thickness Profile | Base Curve Dimension | Base Curve Profile
1 | 1 | 1 | 1
2 | 1 | 2 | 2
3 | 1 | 3 | 1
4 | 2 | 1 | 2
5 | 2 | 2 | 1
6 | 2 | 3 | 1
7 | 3 | 1 | 1
8 | 3 | 2 | 1
9 | 3 | 3 | 2

Step 2: Plan Number of Samples and Data Categorization

In a small clinical trial, nine trained contact lens wearers are asked to try each of the nine lens designs from the L9 matrix and give their opinion on the ease of insertion. Each time a patient inserts a lens, they are asked to rate how easy it was to do. Responses are integers from 1 to 10, from the worst condition rated 1 (the patient cannot insert the lens) to the best condition rated 10 (the patient needs only one attempt and the lens immediately settles in the right location on the eye). The ratings are grouped into four categories of ease of insertion:

  • Category I (very easy to insert): Ratings 9 – 10
  • Category II (easy to insert): Ratings 7 – 8
  • Category III (moderate to insert): Ratings 5 – 6
  • Category IV (difficult to insert): Ratings 1 – 4
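For illustration, this rating-to-category mapping can be expressed in a few lines of Python (a sketch only; the ratings below are hypothetical, chosen to be consistent with run 1's category counts, since the article does not publish the raw ratings):

```python
from collections import Counter

def categorize(rating):
    """Map an integer ease-of-insertion rating (1-10) to its ordered category."""
    if rating >= 9:
        return "I"    # very easy to insert (9-10)
    if rating >= 7:
        return "II"   # easy to insert (7-8)
    if rating >= 5:
        return "III"  # moderate to insert (5-6)
    return "IV"       # difficult to insert (1-4)

# Hypothetical ratings for one run (not the study's raw data),
# tallied into per-category counts.
ratings = [3, 8, 7, 5, 6, 5, 5, 5, 9]
counts = Counter(categorize(r) for r in ratings)
```

Tallying categorized ratings this way produces the per-category counts used in the next step.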

The design matrix with the outcomes for each run is shown in Table 2.

Table 2: Insertion Ratings Grouped By Category

Experiment Number | Thickness Profile | Base Curve Dimension | Base Curve Profile | I | II | III | IV | Total
1 | 1 | 1 | 1 | 1 | 2 | 5 | 1 | 9
2 | 1 | 2 | 2 | 3 | 3 | 3 | 0 | 9
3 | 1 | 3 | 1 | 4 | 2 | 2 | 1 | 9
4 | 2 | 1 | 2 | 2 | 2 | 3 | 2 | 9
5 | 2 | 2 | 1 | 4 | 4 | 1 | 0 | 9
6 | 2 | 3 | 1 | 1 | 3 | 1 | 4 | 9
7 | 3 | 1 | 1 | 5 | 3 | 1 | 0 | 9
8 | 3 | 2 | 1 | 2 | 5 | 1 | 1 | 9
9 | 3 | 3 | 2 | 4 | 1 | 4 | 0 | 9

Step 3: Calculate Probability of the Outcomes Per Category and Run

In order to estimate the location and dispersion effects of each run, the counts in each category of each run must be transformed into probability values. Let i denote an experiment run, i = 1, 2, …, I (in this example, I = 9), and let j denote a category of experimental outcomes, j = I, II, …, J (in this example, J = IV). Then the probability (proportion) that an outcome falls in the j-th category of the i-th run, pij, is calculated as follows:

pij = nij/si

where nij is the number of outcomes in the j-th category of the i-th run and si is the total number of outcomes across all categories in the i-th run.

For example, the probability of an outcome being placed in the III-th category of the 1st run is p1III = n1III/s1 = 5/9 = 0.56. The probability of the outcome in each category of each run is shown in Table 3.
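This per-run calculation is easy to reproduce in a short script. A Python sketch using the Table 2 counts (variable names are illustrative):

```python
# Category counts per run, transcribed from Table 2 (categories I..IV).
counts = {
    1: [1, 2, 5, 1], 2: [3, 3, 3, 0], 3: [4, 2, 2, 1],
    4: [2, 2, 3, 2], 5: [4, 4, 1, 0], 6: [1, 3, 1, 4],
    7: [5, 3, 1, 0], 8: [2, 5, 1, 1], 9: [4, 1, 4, 0],
}

# pij = nij / si: the proportion of run i's outcomes falling in category j.
p = {i: [n / sum(nij) for n in nij] for i, nij in counts.items()}
```

Rounding p[1] to two decimals reproduces the first row of Table 3.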

Table 3: Probability of Outcomes

Experiment Number | I | II | III | IV | Total | p(I) | p(II) | p(III) | p(IV)
1 | 1 | 2 | 5 | 1 | 9 | 0.11 | 0.22 | 0.56 | 0.11
2 | 3 | 3 | 3 | 0 | 9 | 0.33 | 0.33 | 0.33 | 0.00
3 | 4 | 2 | 2 | 1 | 9 | 0.44 | 0.22 | 0.22 | 0.11
4 | 2 | 2 | 3 | 2 | 9 | 0.22 | 0.22 | 0.33 | 0.22
5 | 4 | 4 | 1 | 0 | 9 | 0.44 | 0.44 | 0.11 | 0.00
6 | 1 | 3 | 1 | 4 | 9 | 0.11 | 0.33 | 0.11 | 0.44
7 | 5 | 3 | 1 | 0 | 9 | 0.56 | 0.33 | 0.11 | 0.00
8 | 2 | 5 | 1 | 1 | 9 | 0.22 | 0.56 | 0.11 | 0.11
9 | 4 | 1 | 4 | 0 | 9 | 0.44 | 0.11 | 0.44 | 0.00

Step 4: Estimate Location and Dispersion Effects of Each Run

Given each category j a weight wj, equal to the upper limit of the j-th category rating range (wI = 10, wII = 8, wIII = 6, wIV = 4 in this example), the location score Wi for the i-th run is defined by:

Wi = Σj wj pij

The rationale for using the upper limit of the category rating range is that the weight should reflect the rating values. The dispersion score di2 is defined by:

di2 = Σj (wj pij – Tj)2

where the target values Tj are defined as {the upper limit of the I-st category rating range, 0, 0, …, 0}, i.e. {10, 0, 0, 0}, for categories {I, II, III, …, J}, respectively.


The rationale for setting the target values this way is that only outcomes falling in the best category are rewarded. For example, the location and dispersion scores for the 1st run are W1 = 10*0.11 + 8*0.22 + 6*0.56 + 4*0.11 = 6.7 and d12 = [10*(1/9) – 10]2 + [8*(2/9) – 0]2 + [6*(5/9) – 0]2 + [4*(1/9) – 0]2 = 93.48 (the exact proportions, e.g., 1/9 rather than 0.11, are used here to avoid rounding error). The location and dispersion scores of the outcomes of each run are shown in Table 4.
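The two scores can be computed directly from the proportions. A minimal Python sketch, where w and target follow the definitions above:

```python
w = [10, 8, 6, 4]        # weight = upper limit of each category's rating range
target = [10, 0, 0, 0]   # only the best category is rewarded

def location(p_i):
    """Location score Wi: sum of wj * pij over the categories."""
    return sum(wj * pj for wj, pj in zip(w, p_i))

def dispersion(p_i):
    """Dispersion score di2: sum of (wj * pij - Tj)**2 over the categories."""
    return sum((wj * pj - t) ** 2 for wj, pj, t in zip(w, p_i, target))

p1 = [1/9, 2/9, 5/9, 1/9]              # run 1 proportions (exact fractions)
W1, d1 = location(p1), dispersion(p1)  # about 6.67 and 93.48
```

Using the exact fractions rather than the two-decimal proportions of Table 3 reproduces the 93.48 quoted above.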

Table 4: Location, Dispersion and Mean Square Deviation Scores
Experiment Number | Thickness Profile | Base Curve Dimension | Base Curve Profile | Location Score (Wi) | Dispersion Score (di2) | MSD
1 | 1 | 1 | 1 | 6.7 | 93.5 | 0.16
2 | 1 | 2 | 2 | 8.0 | 55.6 | 0.06
3 | 1 | 3 | 1 | 8.0 | 36.0 | 0.04
4 | 2 | 1 | 2 | 6.9 | 68.4 | 0.11
5 | 2 | 2 | 1 | 8.7 | 44.0 | 0.04
6 | 2 | 3 | 1 | 6.2 | 89.7 | 0.21
7 | 3 | 1 | 1 | 8.9 | 27.3 | 0.03
8 | 3 | 2 | 1 | 7.8 | 80.9 | 0.08
9 | 3 | 3 | 2 | 8.0 | 38.8 | 0.04

One performance measure that combines location and dispersion effects is the mean square deviation (MSD), which allows practitioners to make judgments in one step. If an outcome is a larger-the-better characteristic, its expected MSD can be approximated in terms of the location and dispersion effects as follows:

E[MSD]i = (1/Wi2)(1 + 3di2/Wi2)

For example, the expected MSD for the 1st run is E[MSD]1 = 1/(6.67)2 * (1 + 3*93.5/(6.67)2) = 0.16. The MSD scores for all runs are given in Table 4.
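As a quick check of the arithmetic, the approximation can be evaluated directly (a Python sketch; expected_msd is an illustrative name):

```python
def expected_msd(W, d2):
    """Larger-the-better approximation: E[MSD] = (1/W**2) * (1 + 3*d2/W**2)."""
    return (1 / W**2) * (1 + 3 * d2 / W**2)

msd1 = expected_msd(6.67, 93.5)  # run 1 scores from Table 4
```

Rounded to two decimals this matches the 0.16 in the first row of Table 4.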

The location, dispersion and expected MSD effects for each design factor are summarized as Tmax – Tmin values (Figures 1, 2 and 3). Higher Tmax – Tmin values, or steeper main-effects curves, indicate a stronger influence of that design factor on the outcomes.

Figure 1: Effects and Optimal Solutions for Location Scores


Factor Level | Thickness Profile | Base Curve Dimension | Base Curve Profile
1 | 7.6 | 7.5 | 7.7
2 | 7.3 | 8.1 | 7.6
3 | 8.2 | 7.4 | Not available
Tmax – Tmin | 1.0 | 0.7 | 0.1
Optimal | Level 3 | Level 2 | Level 1

Figure 2: Effects and Optimal Solutions for Dispersion Scores


Factor Level | Thickness Profile | Base Curve Dimension | Base Curve Profile
1 | 61.7 | 63.1 | 61.9
2 | 67.4 | 60.1 | 54.3
3 | 49.0 | 54.8 | Not available
Tmax – Tmin | 18.4 | 8.2 | 7.6
Optimal | Level 3 | Level 3 | Level 2

Figure 3: Effects and Optimal Solutions for MSD Scores


Factor Level | Thickness Profile | Base Curve Dimension | Base Curve Profile
1 | 0.09 | 0.10 | 0.09
2 | 0.12 | 0.06 | 0.07
3 | 0.05 | 0.10 | Not available
Tmax – Tmin | 0.07 | 0.04 | 0.02
Optimal | Level 3 | Level 2 | Level 2

Step 5: Determine Optimal Solutions

The level of a particular design factor with the highest location score, the lowest dispersion score or the lowest expected MSD is the optimal setting under each criterion, respectively. The optimal solution based on the expected MSD criterion is the best trade-off between maximal location and minimal dispersion scores.
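Picking the optimal levels amounts to averaging the MSD scores over each level of each factor and taking the level with the lowest mean. A Python sketch, where runs transcribes the factor settings and MSD column of Table 4:

```python
# (thickness profile, base curve dimension, base curve profile, MSD) per run.
runs = [
    (1, 1, 1, 0.16), (1, 2, 2, 0.06), (1, 3, 1, 0.04),
    (2, 1, 2, 0.11), (2, 2, 1, 0.04), (2, 3, 1, 0.21),
    (3, 1, 1, 0.03), (3, 2, 1, 0.08), (3, 3, 2, 0.04),
]

optimal = []
for factor in range(3):  # one column of the design matrix at a time
    levels = sorted({r[factor] for r in runs})
    mean_msd = {
        lv: sum(r[3] for r in runs if r[factor] == lv)
            / sum(1 for r in runs if r[factor] == lv)
        for lv in levels
    }
    optimal.append(min(mean_msd, key=mean_msd.get))  # lowest mean MSD wins
```

This reproduces the MSD-based choice of Figure 3: level 3 for thickness profile, level 2 for base curve dimension and level 2 for base curve profile.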

The predicted optimal solution based on the expected MSD criteria is thickness profile at level 3, base curve dimension at level 2 and base curve profile at level 2. But if practitioners know there are interaction effects among design factors, they cannot depend solely on the main effect values or plots to choose the settings of design factors. The interaction plot for the expected MSD effects shows that thickness profile heavily interacts with base curve level/dimension (Figure 4). A small interaction also exists between base curve dimension and base curve profile. After taking interaction effects into consideration, practitioners need to examine whether the chosen optimal design factor levels still give optimal effects to the experiment outcomes.

Figure 4: Interaction Plot of Thickness Profile, Base Curve Level/Dimension


In this case, thickness profile at level 3 gives almost consistently the lowest MSD scores across the levels of base curve dimension, and consistently the lowest MSD scores across the levels of base curve profile; thus it gives the optimal effect on the experiment outcomes. Base curve dimension at level 2 almost consistently gives the lowest MSD scores across the levels of thickness profile, and consistently the lowest MSD score across the levels of base curve profile; thus it, too, is optimal. The Tmax – Tmin value of the base curve profile is the lowest and its curve is flat, so base curve profile has an insignificant influence on the outcomes and can be set at either level 1 or 2. Therefore, the expected MSD predicts that a lens design with thickness profile at level 3, base curve dimension at level 2 and base curve profile at either level 1 or 2 would give the optimal ease of insertion.

Easy to Implement Optimization Method

A modified WPSS is a simple and straightforward method for dealing with ordered categorical data. This case study shows that a single performance measure, the MSD derived from WPSS, can provide insight into a system through experiments and can direct practitioners to the optimal solution.

Comments 6

  1. cristian

    I think that something is not OK as regarding MSD calculation!
    Please check the formula for MSD to fit the first run result (0,16)

  2. Robert Butler

    Is there any chance you could show the raw data for the experiment? Specifically the 9 actual ratings for each of the design points? If not – how about re-running the analysis as a straight up repeated measures regression problem and show us how different the results of optimizing using the regression equation differ from the method shown. You would want to use the Box-Meyers method for determining dispersion.

  3. Chris Seider

    Following up with @rbutler

    Have you done an MSA on the precision of your output measured for the experiment?

