
SPC in Electronics Production



This topic contains 9 replies, has 4 voices, and was last updated by Chris Seider 3 months, 1 week ago.

    #701857

    Hi.

    I’m working on a thesis on continuous improvement in electronics manufacturing, and was hoping someone with SPC experience could help me out with a question.

    If you are using control charts to monitor your output, the data produced by the testers are far from trivial. Each test can include hundreds of measurements, ranging from simple to complex, and in addition a product is tested at various stages of the production cycle.

    I have found that the Western Electric rules produce, on average, one false alarm per 92 measurements, and I take this to mean that a high-volume production line would generate massive numbers of false alarms if you monitor all processes.
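
    To make that concrete, here is a minimal Monte Carlo sketch (Python, invented data) that estimates the in-control average run length for the four classic Western Electric rules, assuming independent, normally distributed measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def first_we_signal(z):
        """Return the 1-based index of the first Western Electric rule
        violation in a series of z-scores, or None if none occurs."""
        n = len(z)
        for i in range(n):
            if abs(z[i]) > 3:                  # Rule 1: 1 point beyond 3 sigma
                return i + 1
            if i >= 2:
                w = z[i-2:i+1]                 # Rule 2: 2 of 3 beyond 2 sigma, same side
                if (w > 2).sum() >= 2 or (w < -2).sum() >= 2:
                    return i + 1
            if i >= 4:
                w = z[i-4:i+1]                 # Rule 3: 4 of 5 beyond 1 sigma, same side
                if (w > 1).sum() >= 4 or (w < -1).sum() >= 4:
                    return i + 1
            if i >= 7:
                w = z[i-7:i+1]                 # Rule 4: 8 in a row on one side of center
                if (w > 0).all() or (w < 0).all():
                    return i + 1
        return None

    run_lengths = [first_we_signal(rng.standard_normal(2000))
                   for _ in range(2000)]
    run_lengths = [r for r in run_lengths if r is not None]
    print(f"Estimated in-control ARL: {np.mean(run_lengths):.0f}")  # roughly 92
    ```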

    Are any of the derived rule sets (Nelson, Juran, etc.) considered more suitable for broad monitoring of electronics (so that you don’t have to pick only a few important measurements or a few product samples)? And how many false alarms would they generate? I found a blog claiming that SPC is obsolete due to the amount of data generated today, but judging by the comments this did not seem to be a common perception.

    A follow-up question on SPC: according to Wikipedia, one of the key steps is to remove common cause variation. Is this possible today, considering the complexity of electronics?

    Thanks,

    #701869

    Michael is welcome to his opinion, but his blog is such that he wins by virtue of the multiple-bad-things defense – there’s so much wrong with what he states that one does not know where to begin.

    As to your question – it is extremely broad and impossible to address adequately in a forum of this type; however, I can offer a few observations.

    1. You state that the average false alarm rate for the Western Electric rules is one per 92 measurements.
    a. The Western Electric rules assume you are looking at independent measures.
    b. The measures you are citing are not independent – they are repeated measures on the same object. What this means is that the confidence limits you have in the simulation(?) you have run are far too narrow and do not express the actual ordinary variation of the process. The end result is what you have reported – lots of false alarms.

    2. Your first big issue will be to address this lack of independence of measure and translate that into correct control limits for your process (this, by the way, is not trivial).
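
    To illustrate point 1b with a sketch (invented AR(1) data standing in for correlated in-line measurements; individuals-chart limits estimated the usual way from the average moving range):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # AR(1) series: x[t] = phi * x[t-1] + noise. The value of phi is
    # arbitrary; it just makes consecutive measurements correlated.
    phi, n = 0.7, 50_000
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()

    # Individuals-chart limits from the average moving range
    # (MR-bar / 1.128 is the usual estimate of sigma).
    sigma_hat = np.abs(np.diff(x)).mean() / 1.128
    center = x.mean()
    alarms = np.abs(x - center) > 3 * sigma_hat

    # Positive auto-correlation makes the moving range understate the
    # real process spread, so the limits come out too narrow.
    print(f"Rule-1 alarm rate: {alarms.mean():.3f} "
          f"(about 0.003 if the data were independent)")
    ```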

    As for your follow-up question – no, the point of SPC is to help identify and remove special cause variation. You can’t eliminate common cause – if you could, you would have a process with zero variation in output. If you actually developed such a process you would be first in line for a Nobel Prize in Physics.

    #701878

    SPC is great along with feedback control.

    #702291

    Robert is right on the money. I would also add that you shouldn’t use run rules for process control. They are handy for offline continuous improvement research, but they produce too many false alarms when used for real-time process control. Another thing: if you have hundreds of variables you might want to consider using a data reduction technique such as principal components analysis to focus on the much smaller number of underlying factors rather than the variables themselves. It’s easy to get overwhelmed when looking at correlated variables and to lose sight of the forest for the trees.
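
    A minimal sketch of the data reduction idea, assuming scikit-learn is available (the data and factor structure here are invented purely for illustration):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)

    # Pretend test data: 500 units x 200 correlated measurements,
    # secretly driven by 4 latent factors plus noise.
    factors = rng.standard_normal((500, 4))
    loadings = rng.standard_normal((4, 200))
    X = factors @ loadings + 0.3 * rng.standard_normal((500, 200))

    pca = PCA().fit(X)
    cum = np.cumsum(pca.explained_variance_ratio_)
    k = int(np.searchsorted(cum, 0.90)) + 1
    print(f"{k} components explain {cum[k - 1]:.0%} of the variance")

    # Chart the k component scores instead of all 200 raw measurements.
    scores = pca.transform(X)[:, :k]
    ```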

    #702732

    Thank you @rbutler, very helpful.
    I’m not sure I agree with you on “The measures you are citing are not independent – they are repeated measures on the same object.” Testing an electronics component, let’s say a TV, happens at many different levels. At the early stages you test individual units, but later these are built into the final product. There may be similar measurements in this chain, but few identical ones. So testing one TV may lead to hundreds of measurements for each device, i.e., not repeated. I assume the next one is treated as a new unit, so not the same object. Am I misinterpreting what you say here?

    @tompyzdek, this might be what you are pointing towards, and where principal components analysis may help?

    #702747

    When I hear “repeated measurements” I’m thinking about multiple readings of the same property on a single item using the same measurement process. When this occurs then we are studying the measurement system, not the process that produced the item(s) being measured. Is that the case here, or am I misunderstanding the discussion topic?

    Regarding where PCA might help, I’m thinking here about multi-variate process control. This article should help: http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc34.htm. Multi-variate process control involves plotting a small number of statistics (or even a single statistic) that are composites of measurements of multiple features on each item in the sample. When the statistic falls outside a control limit, the investigator has some additional detective work to do to determine what caused the problem, because she must consider that multiple variables have an impact on the multi-variate statistic. Despite this drawback, multi-variate statistics may still be worth using because they have the potential to reduce hundreds of measurements to only 1-5 statistics (principal components). An additional benefit occurs when subject matter experts can make sense of the “loadings” of the principal components in real-world terms. For example, I’ve seen hundreds of positional measurements on a fabricated part reduced to 4 principal components that the engineers determined were associated with the 4 axes of movement of the CNC machine: X, Y, Z and rotation. This information proved extremely useful for process control and improvement.
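
    For the curious, a bare-bones sketch of the Hotelling T-squared chart covered in that handbook section (invented data; five features; the Phase II limit for future individual observations uses the F distribution):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Phase I: estimate the mean vector and covariance matrix from
    # m in-control reference units with p features each.
    p, m = 5, 200
    ref = rng.multivariate_normal(np.zeros(p), np.eye(p), size=m)
    mu = ref.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(ref, rowvar=False))

    def t_squared(x):
        """Hotelling's T-squared distance of one unit from the reference."""
        d = x - mu
        return float(d @ S_inv @ d)

    # Phase II upper control limit for future individual observations.
    alpha = 0.0027
    ucl = (p * (m + 1) * (m - 1)) / (m * (m - p)) * stats.f.ppf(1 - alpha, p, m - p)

    new_unit = rng.multivariate_normal(np.zeros(p), np.eye(p))
    print(f"T2 = {t_squared(new_unit):.2f}, UCL = {ucl:.2f}")
    ```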

    #702748

    If you are running multiple tests on the same sub-assembly, and if these measurements are known to be physically independent of one another within that sub-assembly, then I agree that would not be a situation of repeated measures. However, if the things being measured could affect one another, then the measurements will be repeated and not independent.

    In addition to this there is another problem. On an assembly line there is no guarantee of independence of performance measurements from one unit to the next. The data will most likely be auto-correlated, and different independent measurements within a sub-assembly could have different auto-correlation structures across sub-assemblies.

    If you don’t identify and correct for measurement auto-correlation before running your control charts you will get a lot of false alarms.

    #702749

    @tompyzdek: Thanks, I’ll have a look at that. One could argue that all readings are independent, but they could still have some dependencies; a fault in one component, for instance, could affect others, as @rbutler points out. So if I understand this correctly, when such dependencies are present the Western Electric rules will false-alarm more often than once per 92 measurements, with the actual rate depending on the strength of the dependencies, and the rate could be increased further by measurement auto-correlation.

    Am I reading you correctly, Robert, that you would need to identify each individual auto-correlation structure and compensate for it?

    #702789

    Repeated measures can mean multiple measures of the same thing within a single unit, but it can also mean measures of different things within a single unit where the measurements cannot be considered independent of one another.

    An example of this appeared on this forum many years ago. The poster had run a taste panel of cookies, giving X different cookie types to 20 different tasters and recording their ratings from 1-10 for each cookie type. They weren’t getting any significant differences between the cookies, and the reason was that, even though different things (cookie types) were being measured within a given taster, the ratings of the different cookies within a given taster were not independent.

    This is also a problem in medicine. If you are taking measurements within a given patient that are known to be independent (height, weight, age), then repeated measures is not an issue. However, if you are running an analysis on, say, cytokine level and type, these things can be related, so if the model you are considering includes different cytokine types and levels you will need to identify the patient as the smallest unit of independence and run the analysis as repeated measures.

    When it comes to auto-correlation there are various ways to adjust the data to remove the effect of auto-correlation and generate correct confidence intervals. The simplest method is to do a time series analysis on each of the measurements made on a sub-assembly and graphically identify the point at which the auto-correlation for each particular measurement drops to 0. If you sample a unit the plots will tell you the number of units you will need to ignore (in time sequence) before you can sample another unit and have assurance that its performance is independent of the previously sampled item.

    In the instances where you are making multiple measurements of different aspects of a sampled unit, you will need to run a time series analysis for each measurement. Once you have this, identify the measurement needing the largest separation between units before the requirement of independence is met, and use that separation to define your sampling frequency with respect to control charting.

    In those cases where you have to test each sub-assembly what this means is that you record all of the data for each unit but you only use the data from every nth unit to generate your control chart(s).
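
    A rough sketch of that procedure (invented data; the 2/sqrt(n) band is the usual approximate 95% cutoff for a sample autocorrelation):

    ```python
    import numpy as np

    def first_insignificant_lag(x, max_lag=50):
        """Smallest lag at which the sample autocorrelation falls inside
        the approximate 95% band of +/- 2/sqrt(n)."""
        x = np.asarray(x, dtype=float)
        xc = x - x.mean()
        denom = (xc * xc).sum()
        band = 2 / np.sqrt(len(x))
        for lag in range(1, max_lag + 1):
            r = (xc[:-lag] * xc[lag:]).sum() / denom
            if abs(r) < band:
                return lag
        return max_lag

    # Invented example: three measurements per unit, each with its own
    # AR(1) auto-correlation strength across consecutive units.
    rng = np.random.default_rng(4)
    n = 2000
    data = np.empty((n, 3))
    for j, phi in enumerate([0.3, 0.6, 0.8]):
        data[0, j] = rng.standard_normal()
        for t in range(1, n):
            data[t, j] = phi * data[t - 1, j] + rng.standard_normal()

    lags = [first_insignificant_lag(data[:, j]) for j in range(3)]
    step = max(lags)  # the measurement needing the widest separation wins
    print(f"Per-measurement lags: {lags}; chart every {step}th unit")

    # Record everything, but chart only every nth unit.
    charted = data[::step]
    ```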

    I’ve only done a couple of PCA studies in my career, and I thought they were useful. Unfortunately, in each case I was required to give physical meaning to each of the PCs before using the method for process control. I couldn’t do this, so the approach was rejected. The key thing to bear in mind is that all this happened many years ago, and it is quite likely that the engineering/management view of PCs has changed since then. Consequently, I like @tompyzdek’s suggestion concerning PCA and I’d recommend checking that out as a possible solution to your problem.

    #702840

    @rbutler Interdependence is quite the complicated “qualification.” Heck, many processes have some semblance of a relationship between consecutive samples, but I’d hate to say one shouldn’t use SPC just because, for example, the sampling of something in the bloodstream has been “impacted” by previous events. I know you’re not saying such a thing.
