
MSA – Is my scale accurately measuring my process?

    #53397

    Henry
    Member

    I am using a table-top scale to measure product in ounces. The scale reads to the hundredths place (0.00) but rounds the last digit up or down. How do I tell how light a product can be while still being sure my scale can detect a small difference? Example: one product weighs between 3.0 and 4.2 oz. Does my scale carry enough decimal places to show a normal distribution if there is one? Or will the data always look non-normal because the scale is not capable of giving enough information?
    Thanks,
    Scott

    #189993

    Cinnamond
    Participant

    Weight is a normally distributed variable. This may or may not be hidden in your data, depending on how variable your product is (as you have already found out). The question to focus on is whether the resolution of your gauge is sufficient to keep your process in control (you didn’t mention your spec limits). If you cannot do what you need to do because the resolution is not good enough, then get another gauge. Maybe the gauge will also measure in grams? You would have to do some conversions, but…
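
    The conversion itself is trivial; a quick sketch in Python, using the 3.0 to 4.2 oz range mentioned above as example values:

        OZ_TO_G = 28.349523125  # grams per avoirdupois ounce
        for oz in (3.0, 4.2):
            print(f"{oz} oz = {oz * OZ_TO_G:.1f} g")  # roughly 85.0 g and 119.1 g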

    #189994

    Mikel
    Member

    Weight is normally distributed?

    That is a pretty dumb statement. The variable’s distribution is going to be process dependent. I’ve seen hundreds of times where weight is bumping against a natural limit (container size, for example) and the distribution was skewed.

    #189995

    mcleod
    Member

    Donky, I appreciate the info. Stan, I don’t think insults were necessary. You are correct that distributions can be skewed by natural barriers; that is not the case in this situation.
    Thanks.

    #190014

    Henry
    Member

    I apologize if this is terribly obvious to some, but here goes again.
    I have multiple processes that I want to make sure I am measuring correctly. The data (400 samples) currently shows as non-normal when run through an Anderson-Darling test. I am not sure if I have enough resolution from my gauge/scale. Is there a statistical tool/test that will tell me how much resolution I need to be able to trust my data?
    Also, if my data is non-normal, can I use it to make any assumptions about my process?
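
    For reference, here is roughly how the Anderson-Darling check can be run in Python. This is only a sketch; “weights.txt” is a placeholder for wherever the 400 readings live, and scipy must be installed.

        import numpy as np
        from scipy import stats

        # Load the scale readings, one value per line (filename is a placeholder)
        weights = np.loadtxt("weights.txt")

        # Anderson-Darling test against the normal distribution
        result = stats.anderson(weights, dist="norm")
        print(f"A-D statistic: {result.statistic:.3f}")
        for crit, sig in zip(result.critical_values, result.significance_level):
            verdict = "reject normality" if result.statistic > crit else "cannot reject"
            print(f"  {sig:4.1f}% level: critical value {crit:.3f} -> {verdict}")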

    #190015

    Andrew Banks
    Participant

    Scott:

    “Accuracy” may not be what you are looking for.

    Check out the following link to Measurement System Analysis:

    https://www.isixsigma.com/index.php?option=com_k2&view=itemlist&task=category&id=75:measurement-systems-analysis-msa/gage-rr&Itemid=207

    First, scales are typically calibrated and certified by an outside agency that itself uses standard reference materials (typically traceable to NIST standards – picture a 1 kg ingot of metal stored in an environmentally controlled vault) to ensure/declare the accuracy/linearity (with error +/-%) of your device. Does your scale have a calibration sticker that is current? Expired? This may be a first step. “Accuracy” is defined as the bias between your system’s reading and the “true” value. This can only be determined by comparing your device to a known standard reference.

    Next, how did you select your parts? Do they represent the entire variation of the process? They need to.

    Third, a GR&R can help you determine if your measurement system has adequate discrimination to measure the VARIATION of your process (number of distinct categories, or ndc). I have seen a couple of thresholds used (>5, >10); it seems this might be a design-criteria decision (how well do you need to be able to measure/control variation?). The number of categories will help you answer your question about how well you can measure your processes with your measurement system. So you see, it isn’t the “weight” but the distribution of weights of the parts your process produces that determines whether the measurement system has adequate discrimination.
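
    As a rough sketch of the ndc calculation in Python: the sigma values below are placeholders, not from anything in this thread; in practice they come out of your ANOVA-based GR&R study.

        import math

        sigma_part = 0.15  # part-to-part standard deviation (placeholder value)
        sigma_grr = 0.02   # total gage R&R standard deviation (placeholder value)

        # AIAG-style formula: ndc = sqrt(2) * PV / GRR, truncated to an integer
        ndc = math.floor(math.sqrt(2) * sigma_part / sigma_grr)
        print(f"ndc = {ndc}")  # compare against the >5 or >10 thresholds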

    MSA won’t “make” your data normally distributed if it isn’t. Most processes aren’t. By no means does that indicate that statistical techniques and process improvement are out the window. How you proceed is entirely dependent upon what your problem is…but at least you’ll go forward with a better knowledge of the accuracy, precision, repeatability and reproducibility of your process data.

    #190018

    O’Connell
    Participant

    There is another method, or rather a rule of thumb: the gage should read one more decimal place than the variation of the process. So, for example, if the process variation is 0.01, the gage should read to the third decimal place. The AIAG manual has accepted this method.
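
    A quick sketch of that check in Python; treat it as illustrative only. “resolution” is whatever increment your scale actually displays, and the readings are just an excerpt of data posted later in this thread.

        import numpy as np

        readings = [4.3, 4.1, 4.38, 4.02, 3.98, 4.42, 3.92, 4.46]  # excerpt
        process_sd = np.std(readings, ddof=1)  # estimate of process variation

        resolution = 0.01  # smallest increment the scale displays
        # Rule of thumb: the gage should resolve one decimal place finer than
        # the process variation, i.e. resolution <= variation / 10
        print(f"sd = {process_sd:.3f}, adequate: {resolution <= process_sd / 10}")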

    #190020

    Craig
    Participant

    Can you post your data?

    #190062

    Henry
    Member

    I will do a GR&R, but I was looking for a tip like Brian’s. Thanks!

    Upper CL = 4.55
    Target = 4
    Lower CL = 3.45

    4.3
    4.1
    4.6
    4.2
    4.1
    4.1
    4.38
    4.2
    4.02
    4.1
    4.08
    4.1
    3.98
    4.42
    3.92
    4.46
    4.1
    4.1
    4
    4.2
    4.05
    4.1
    4.2
    4.2
    4
    4
    3.98
    4
    4
    4.03
    4
    4.5
    4.01
    4.08
    4.8
    4.2
    4.2
    4.1
    4.5
    4.5
    4
    4.1
    4
    4.1
    4.4
    4.1
    4
    4.3
    3.9
    4
    4.2
    3.8
    3.9
    4.1
    4
    4
    4.2
    4
    4
    3.8
    3.8
    4.1
    4.3
    4.2
    4
    4.1
    4
    4.2
    4.1
    4.1
    4.13
    4.1
    4.3
    4.32
    4
    5
    4.3
    4.2
    4.3
    4.6
    4.1
    4.5
    4.2
    4.2
    4
    3.9
    4
    3.98
    4
    4.3
    4
    4.2
    4.1
    4.1
    4
    3.9
    4.52
    4.46
    4.06
    4.1
    4.14
    4.2
    4
    4
    4
    4.05
    4.3
    3.8
    4.38
    4.26
    4.12
    4.18
    4
    4.32
    4.08
    4.14
    4.46
    4.12
    4
    4.1
    4.2
    4.2
    4.15
    4.1
    4.36
    4.08
    3.96
    4.3
    4.1
    4.2
    4
    3.8
    4.2
    3.7
    4.1
    4.56
    4.02
    4.14
    4.16
    4.22
    4.26
    4
    4.28
    4.28
    4.1
    4.28
    4.21
    4.2
    4.28
    4.08
    3.96
    4.26
    4.16
    4.04
    4.28
    4.16
    4
    4.2
    4.4
    4.3
    4.16
    4.24
    4.18
    4.2
    4.34
    4.2
    4.14
    4.12
    4.12
    4.26
    4.1
    4.2
    4.1
    4
    4.1
    3.9
    4.3
    4.05
    4.1
    4.05
    4.1
    4.09
    4.05
    4
    4.05
    3.84
    3.9
    3.8
    4.1
    4.3
    3.9
    4.2
    4
    4.02
    3.8
    4.2
    4.3
    4
    4.24
    4.06
    4.1
    4.08
    4.06
    4.24
    4.06
    4
    4
    4.12
    4
    4.1
    4
    4
    4.58
    4.5
    4.14
    4.22
    4.28
    4
    4.32
    4
    4
    4.2
    4.1
    4
    4
    4.26
    4.18
    4.05
    4.42
    4.54
    4.11
    4.16
    4
    4
    4.1
    4.2
    4
    4.2
    4.07
    4.22
    4.14
    4.1
    4.28
    4.18
    4.14
    4.22
    3.94
    4.2
    4
    4
    4.2
    4.02
    4.1
    4.25
    4.12
    4.25
    4.02
    4.1
    4.02
    4
    3.9
    4.1
    4
    4
    4.2
    4
    4.2
    3.9
    4
    3.92

    #190063

    Putnam
    Participant

    Just a thought. Your dataset has a ton of numbers, which is good. I’m assuming that in every case where your number is truncated (4, 4.1, etc.), the trailing digit(s) are zero.

    Assuming a random part selection, why would more than 1 in 10 numbers have a terminal digit of zero? Why would more than a very limited number of 4.00s exist?

    It could be that you’ve got a data-recording issue instead of a scale problem. That would explain the distribution challenge.
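
    One quick way to check is to tally the final recorded digit of each reading; a sketch in Python, using a small excerpt of your posted values and assuming 4.1 really means 4.10:

        from collections import Counter

        # Excerpt of the posted readings, kept as the strings recorded
        readings = ["4.3", "4.1", "4.6", "4.38", "4.02", "4.08", "3.98", "4", "4.42"]
        last_digits = Counter(f"{float(r):.2f}"[-1] for r in readings)

        # With random data, each terminal digit should show up roughly 1 time in 10
        for digit in sorted(last_digits):
            print(digit, last_digits[digit])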

    Mike86

    #190080

    Henry
    Member

    The previous data set was 5 samples per hour, for eight hours, over many shifts.
    This second set is consecutive samples taken within 5 minutes.
    The scale used for the second set was different. You will also notice that it has an annoying tendency to round the last digit up or down to a zero or a five.
    What do you make of this? That is why I wondered if grams would be a better unit to measure in.

    3.90
    3.55
    3.50
    3.50
    3.55
    3.50
    3.55
    3.55
    3.55
    3.55
    3.45
    3.65
    3.55
    3.55
    3.55
    3.55
    4.20
    4.15
    4.10
    4.20
    4.10
    3.65
    3.55
    3.55
    3.60
    3.55
    4.15
    4.00
    3.90
    3.80
    3.20
    4.25
    4.10
    4.35
    4.05
    4.25
    4.20
    4.25
    4.15
    4.30
    4.15
    4.25
    4.20
    4.25
    4.20
    3.15
    3.30
    3.35
    3.35
    3.45
    3.50
    3.40
    3.45
    3.45
    3.40
    3.40
    3.55
    3.45
    3.50
    3.55

    #190091

    Putnam
    Participant

    I believe most scale manufacturers would tell you that having the last digit land on a 0 or 5 is a “feature.” What it means to you is that your data is only good to the first decimal place.

    Your data is interesting. I’m assuming the order presented is the order produced and the order tested.

    Round the data to one digit (or you can truncate). Then do a run chart of the adjusted data. My opinion, but you’ve got at least two populations here. Do the parts come from two different pieces of equipment, or one unit with two or more heads?
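
    A minimal sketch of that run chart in Python; “weights_run2.txt” is a placeholder for the 60 consecutive readings, and matplotlib is assumed to be available.

        import numpy as np
        import matplotlib.pyplot as plt

        weights = np.loadtxt("weights_run2.txt")  # placeholder file of the readings
        rounded = np.round(weights, 1)            # keep only the trustworthy first decimal

        plt.plot(range(1, len(rounded) + 1), rounded, marker="o")
        plt.axhline(np.median(rounded), linestyle="--", label="median")
        plt.xlabel("Sample order")
        plt.ylabel("Weight (oz)")
        plt.legend()
        plt.show()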

    #190097

    Craig
    Participant

    It looks like your data is bimodal.

    I’d say that is affecting your normality test more so than the rounding.
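
    You can see it in a simple histogram; a sketch, with the filename again a placeholder for the second data set:

        import numpy as np
        import matplotlib.pyplot as plt

        weights = np.loadtxt("weights_run2.txt")  # placeholder for the second data set
        plt.hist(weights, bins=12)                # two clusters show up as two peaks
        plt.xlabel("Weight (oz)")
        plt.ylabel("Count")
        plt.show()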

    #190142

    Henry
    Member

    This data was from a single head. It is possible that they could have switched lots of material; I will have to do more digging.

    ;)

    As the process runs through the day and adjustments are made, the data will become multi-modal. I would assume that means you have to gather a great deal of data to find any normality. I would also assume that the larger the adjustments you have to make, the less normal your data will become?

    #190146

    Craig
    Participant

    Sounds like an intrinsically non-normal variable (due to tool wear?).
    The only way to expect normality is with real-time feedback in the process, where you correct for tool wear.
    Even with feedback/correction, you still might experience non-normality if you are over-correcting.
    Going back to the question that initiated this post:
    If you are trying to find out whether your scale is measuring accurately, that calls for a bias study.
    If you are trying to quantify precision, do a GR&R.
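
    A minimal sketch of a bias study in Python: weigh a known reference standard repeatedly and test whether the mean reading differs from the certified value. The certified value and readings below are placeholders, not data from this thread.

        import numpy as np
        from scipy import stats

        reference = 4.00  # certified value of the standard, in oz (placeholder)
        readings = np.array([4.02, 3.99, 4.01, 4.03, 4.00, 4.02, 3.98, 4.01])  # placeholders

        bias = readings.mean() - reference
        t_stat, p_value = stats.ttest_1samp(readings, reference)
        print(f"bias = {bias:+.3f} oz, p = {p_value:.3f}")  # small p suggests real bias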

    #190149

    Putnam
    Participant

    Another thought would be to use a multi-vari chart and monitor small sample sets at multiple times during the day/shift. You could plot weight vs. time and shift. This is often useful when there are cyclic effects in the data.

    You also need to note when setting changes are made on the equipment. These notes can be added to a run chart and are usually very helpful in sorting out this type of issue.

    Since the workers are adjusting the equipment, changes are obviously being introduced, and it wouldn’t be at all surprising for the data to jump around a bit. If there is a loose control, maybe it can be set light and then slip or jump to the heavy setting, requiring further resetting? Different operators with different opinions on what is “right”?

    Basically, record every possible factor (e.g., time, operator, shift, material, setting) that could be identified as different or changing over time, and see which item(s) match up with the changes. A simple grouped plot, sketched below, can help with that.
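
    A sketch of that kind of plot in Python; the CSV file and its column names are assumptions for illustration, not anything from this thread.

        import pandas as pd
        import matplotlib.pyplot as plt

        # Hypothetical log with columns: time, shift, operator, material, setting, weight
        df = pd.read_csv("samples.csv")

        # One trace per shift; distinct levels or jumps between traces point to a factor
        for shift, grp in df.groupby("shift"):
            plt.plot(grp["time"], grp["weight"], marker="o", label=f"shift {shift}")
        plt.xlabel("Time")
        plt.ylabel("Weight (oz)")
        plt.legend()
        plt.show()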
