
Control Charting Continuous 2-Dimensional Data


  • #30204

    Clint Smith
    Participant

    Most paper machines have scanners at the end of the process.  The scanners traverse the width of the machine (the CD, or cross direction) every minute or less, measuring such properties as unit weight and moisture content.  The paper travels past the gauge at 1000-5000 fpm in the MD (machine direction).  A spool is filled with paper to form a full “reel” every hour or so.  The gauging system provides numbers such as reel averages, reel CD 2-sigma, etc.  I’m looking for the best way to control chart the wealth of on-line data from this system.  I want to graphically capture variation in the MD (in time) and in the CD (in the profiles from front to back).

    #78443

    Robert Butler
    Participant

      Based on your description I picture a system with a scanner moving vertically across a moving web, so that the actual path of the trace is a diagonal scan of about a minute’s duration.  The question that remains is how the scanner returns to its initial position – shut off and quick return, resulting in a sawtooth scan, or continuous scanning, resulting in a zig-zag pattern back and forth across the web?
      Either way, the structure of the data will result in multiple measurements down the web at the same X,Y position over time.  If your automatic data gathering records the X,Y location of each measurement, you can take the data and bin it across time for particular sets of X,Y coordinates.  For the time (MD) data you will have to check for autocorrelation and identify the time interval needed for data independence if you wish to run a control chart in the MD direction.  Frankly, for a first look I wouldn’t even think about a control chart; I’d just plot the data in meaningful ways and look at the results.  For example, for across-web (CD) variation you could stratify by Y across time (we are assuming MD is the X direction) and look at boxplots of the data arranged by Y location.  Given that you scan about every minute for about an hour, you could also do the CD boxplots by Y and by time, grouping the data by Y location for a particular time interval.  This would give you a picture of changes in Y over longer periods of time.  Time plots of this nature across multiple paper rolls will give you a picture of your process.  Once you have these pictures you can sit down and give some serious thought to what they suggest about the process and what you might want to do next.
      Obviously this is a lot of plotting.  However, if you tape your sequential plots in time order on a wall so you can really “see” your process, you will probably find the resulting pictures to be very interesting.
      I’ve done similar things with rolls of flocked material.  The resulting picture forced a complete rethinking of the process, of what it was that we really wanted to do with the data we had collected, and of the information that had been extracted.
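    A minimal Python sketch of the stratification described above, assuming the scanner data can be exported as one row per reading with columns for scan time, CD bin, and the measured property.  The column names and the simulated values are illustrative assumptions, not the actual gauging-system output:

```python
# Boxplots of a paper property by CD position, pooled over a reel and by time block.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Simulated stand-in for one hour of scans: 60 scans x 50 CD bins.
n_scans, n_bins = 60, 50
df = pd.DataFrame({
    "scan_time": np.repeat(np.arange(n_scans), n_bins),       # minutes into the reel
    "cd_bin":    np.tile(np.arange(n_bins), n_scans),          # front-to-back position
    "basis_weight": 50 + rng.normal(0, 0.5, n_scans * n_bins), # illustrative values
})

# CD view: one boxplot per CD bin, pooled across the whole reel.
df.boxplot(column="basis_weight", by="cd_bin", grid=False, figsize=(12, 4))
plt.title("Basis weight by CD position (one reel)")
plt.suptitle("")

# CD-by-time view: group scans into 10-minute blocks, then boxplot by CD bin
# within each block to see how the profile drifts over the reel.
df["time_block"] = df["scan_time"] // 10
for block, grp in df.groupby("time_block"):
    grp.boxplot(column="basis_weight", by="cd_bin", grid=False, figsize=(12, 3))
    plt.title(f"Basis weight by CD position, minutes {block*10}-{block*10+9}")
    plt.suptitle("")

plt.show()
```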

    #78454

    Clint Smith
    Participant

    It scans back and forth in a zig-zag pattern.  The data collection is high frequency (60 Hz) and is organized into CD (cross direction) bins of 1″-4″ width.  The CD profile from front to back is displayed graphically and is controlled to produce as flat a profile as possible.  But things happen to defeat the controls; the controllers can only move the profile so far to compensate for a non-uniformity upstream.  We tape sequential plots together electronically, with the highs, high-highs, lows, and low-lows color coded to show the high streaks and time swings.
    The system provides trends in time, the CD profiles, reel averages, and several variation numbers (CD 2-sigma, etc.).  We need a way to quantify our current variation and to be able to look at historical plots to see when we’re stable, when we’re upset, and whether we’re improving over time.

    #78507

    Carl H
    Participant

    Clint,
    Robert’s thoughts on graphical analysis of the data are good and should give you quick info on what the major thickness variation sources are.
    I have the same situation with a film production line. Some things I have done:
    1.  Try to characterize thickness variation overall and in its MD and CD components (a rough sketch of these calculations appears at the end of this post).  We took all “filtering” out for this purpose (we still filter for CD control).  For total variation, I computed the SD of the >5000 CD/MD readings of a roll.  For the CD component, I took the average CD profile data and computed its SD.  For MD variation I took two measures: the SD of the average thickness per scan over a roll of film (say 50 x 1-minute scans), which estimated MD variation throughout a roll, and a high-frequency MD trace (head stopped on the sheet) to estimate short-term MD variation.  I have not tried to reconcile the components of variation to the total yet….
    2.  We SPC chart (I-MR) the average thickness readings at a fixed frequency and respond to signals (typically at product startups and drifts during a run).
    3.  We SPC chart the short-term MD gage at some frequency.  If it signals, we look for the source of the (higher) variation.
    4.  We collect data, but don’t chart (yet), the average CD gage variation, picking the “worst” range over a short CD distance (~5 inches).  This tends to be related to process non-uniformities and our CD control system.
    Note that items 2-4 affect our customers differently and are affected by some of the same and some different Xs, so we track them all independently rather than as “total thickness variation”.  In any case, lower values for all of these variation sources are our goal.
    Sounds like we have some things in common.  Look forward to hearing your thoughts/progress.
    Regards,
    Carl
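    A rough sketch of the variation splits described in item 1, assuming a roll’s unfiltered readings are available as a 2-D array of scans by CD bins.  The array, the fixed-head trace, and all names are simulated, illustrative assumptions:

```python
# Total, CD, and MD standard-deviation components for one roll of film.
import numpy as np

rng = np.random.default_rng(1)
scans, cd_bins = 50, 100                             # ~50 one-minute scans per roll
roll = 25 + rng.normal(0, 0.3, (scans, cd_bins))     # simulated thickness readings

# Total variation: SD over every reading in the roll (>5000 points).
sd_total = roll.std(ddof=1)

# CD component: SD of the average CD profile (each bin averaged over all scans).
avg_cd_profile = roll.mean(axis=0)
sd_cd = avg_cd_profile.std(ddof=1)

# MD component, roll-length scale: SD of the per-scan averages.
scan_averages = roll.mean(axis=1)
sd_md_roll = scan_averages.std(ddof=1)

# MD component, short term: SD of a high-frequency trace with the head stopped.
fixed_head_trace = 25 + rng.normal(0, 0.1, 3600)     # e.g. 60 s at 60 Hz, simulated
sd_md_short = fixed_head_trace.std(ddof=1)

print(f"total SD          : {sd_total:.3f}")
print(f"CD (avg profile)  : {sd_cd:.3f}")
print(f"MD (scan averages): {sd_md_roll:.3f}")
print(f"MD (short term)   : {sd_md_short:.3f}")
```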

    #78510

    Ron
    Member

    An Xbar and R chart is the best tool for this application.  Subgroup sampling becomes critical whenever you are using sampling techniques.  How frequently can you obtain data sets?
    If you can obtain a readout as frequently as every minute or so, use an I-MR chart.

    I like the Xbar & R because it allows you to take measurements from several locations on the roll simultaneously; the range chart will tell you whether you have consistency across the roll, and the Xbar portion will tell you whether you have consistency from time to time.
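    A minimal sketch of the Xbar & R approach described above, treating several CD locations measured on the same scan as one subgroup and charting subgroup means and ranges over time.  The data and the choice of five CD locations per scan are illustrative assumptions; the A2/D3/D4 values are the standard Shewhart constants for subgroups of size 5:

```python
# Xbar and R control limits from subgroups of 5 CD locations per scan.
import numpy as np

rng = np.random.default_rng(2)
n_subgroups, n = 50, 5                         # 50 scans, 5 CD locations per scan
data = 50 + rng.normal(0, 0.5, (n_subgroups, n))

xbar = data.mean(axis=1)                       # subgroup means (time-to-time behaviour)
r = data.max(axis=1) - data.min(axis=1)        # subgroup ranges (across-the-roll spread)

A2, D3, D4 = 0.577, 0.0, 2.114                 # Shewhart constants for n = 5
xbar_bar, r_bar = xbar.mean(), r.mean()

ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

print(f"Xbar chart: CL={xbar_bar:.3f}  LCL={lcl_x:.3f}  UCL={ucl_x:.3f}")
print(f"R chart   : CL={r_bar:.3f}  LCL={lcl_r:.3f}  UCL={ucl_r:.3f}")

out_x = np.flatnonzero((xbar > ucl_x) | (xbar < lcl_x))
out_r = np.flatnonzero((r > ucl_r) | (r < lcl_r))
print("Xbar signals at scans:", out_x, " R signals at scans:", out_r)
```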
     
     

    #78560

    Marc Schaeffers
    Participant

    I would suggest dividing your process into lanes.
    Take XR charts per lane so that the variation in time is captured in the XR chart and the variation across the width of your process can be found by comparing lanes.
    By showing 3 control charts to your operators (left, middle and right) you have most of your process in control.
    With the amount of data you talk about, you should consider either storing the information in Oracle or having a program which shows all information for the last X hours but stores information based on subgroups taken at a specific frequency.
    The stored information is for analysis by engineers and quality control people; the real-time information is for operators.
    If you make sure the measurements across the width of the process get the same date and time stamp, you can even do a complete analysis across the width of the process.  Several tools should give you this option.
    The number of lanes depends on your process and the number of measurements taken across the width of your process.
    Kind regards,
     
    Marc Schaeffers
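    A minimal sketch of the lane idea above, assuming the CD bins of each scan are split into three lanes (left, middle, right) and each lane’s readings on one scan form one subgroup.  The data, lane count, and bin counts are illustrative assumptions:

```python
# Per-lane subgroup averages and ranges: compare lanes for width-wise differences,
# chart each lane over time for time-to-time variation.
import numpy as np

rng = np.random.default_rng(3)
n_scans, n_bins = 60, 90
roll = 50 + rng.normal(0, 0.5, (n_scans, n_bins))   # simulated scan-by-bin readings

lanes = {"left": slice(0, 30), "middle": slice(30, 60), "right": slice(60, 90)}

for name, cols in lanes.items():
    lane = roll[:, cols]                             # one subgroup per scan for this lane
    xbar = lane.mean(axis=1)                         # lane average per scan (time behaviour)
    r = lane.max(axis=1) - lane.min(axis=1)          # lane range per scan (within-lane spread)
    print(f"{name:>6}: mean of lane averages = {xbar.mean():.3f}, "
          f"mean lane range = {r.mean():.3f}")

# Comparing the per-lane summaries (or their charts) shows differences across the
# width; each lane's own chart shows variation in time.
```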
     
     

    #78566

    Robert Butler
    Participant

    I think Clint has been given some excellent advice concerning the use of Xbar and R charts, but I would caution that before he attempts to use these charts to track and perhaps control his process he should first make sure that the data points he extracts for plotting are independent of one another.  Given the frequency of sampling he has described, I think there is a very good chance that sequential data points are not independent.  If sequential data points exhibit significant autocorrelation, the control limits extracted from such data will not reflect the true natural variability of the process.  As a result, the control limits for Xbar and R will be too narrow and Clint will find himself reacting to “significant changes” in his process which really are not significant at all.  To check for autocorrelation, take a block of data and do a time series analysis using the time series module in Minitab (assuming you have that package).  If significant autocorrelation exists, you can use the results of your analysis to determine the proper selection of data for building an Xbar and R chart.
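    For anyone without Minitab, here is a rough Python equivalent of the autocorrelation check described above, using statsmodels.  The AR(1) series below is simulated purely as a stand-in for a block of scan averages:

```python
# Estimate the sample ACF of a block of readings and find the first lag at which
# the autocorrelation drops below a rough significance limit.
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(4)
n = 500
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):                     # AR(1) stand-in for autocorrelated scan data
    x[i] = 0.7 * x[i - 1] + rng.normal(0, 1)

r = acf(x, nlags=40)                      # sample autocorrelation, lags 0..40
signif = 2 / np.sqrt(n)                   # rough two-sigma significance limit

first_ok = next((k for k in range(1, len(r)) if abs(r[k]) < signif), None)
print(f"lag-1 autocorrelation: {r[1]:.2f}")
print(f"first lag with |acf| < {signif:.2f}: {first_ok}")

# If lag-1 is large, subgroup only every 'first_ok'-th reading (or model the
# autocorrelation) before putting the data on an Xbar/R or I-MR chart.
```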

